Test Report: Docker_Linux_crio 17585

ea770f64c27c5646b2ec1dfcd286218478f671de:2023-11-07:31788

Failed tests (7/308)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
|    28 | TestAddons/parallel/Ingress                             |       154.18 |
|   107 | TestFunctional/parallel/License                         |         0.2  |
|   134 | TestFunctional/parallel/ImageCommands/ImageLoadFromFile |         4.65 |
|   159 | TestIngressAddonLegacy/serial/ValidateIngressAddons     |       182.77 |
|   209 | TestMultiNode/serial/PingHostFrom2Pods                  |         3.01 |
|   230 | TestRunningBinaryUpgrade                                |        69.32 |
|   249 | TestStoppedBinaryUpgrade/Upgrade                        |       114.01 |
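For triage, the failure table above can be parsed into structured (order, test, seconds) rows; this is a minimal Python sketch over the report's own data, not part of the minikube test harness:

```python
# Parse the failed-test table into (order, name, seconds) tuples for triage.
rows = """\
28 TestAddons/parallel/Ingress 154.18
107 TestFunctional/parallel/License 0.2
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.65
159 TestIngressAddonLegacy/serial/ValidateIngressAddons 182.77
209 TestMultiNode/serial/PingHostFrom2Pods 3.01
230 TestRunningBinaryUpgrade 69.32
249 TestStoppedBinaryUpgrade/Upgrade 114.01"""

failures = []
for line in rows.splitlines():
    order, name, seconds = line.split()
    failures.append((int(order), name, float(seconds)))

# The slowest failure dominates the wall-clock cost of this run's failures.
slowest = max(failures, key=lambda f: f[2])
print(len(failures), slowest[1])
```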
TestAddons/parallel/Ingress (154.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-890770 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-890770 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-890770 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f29d5df0-addc-4c11-8b42-55d5eec67015] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f29d5df0-addc-4c11-8b42-55d5eec67015] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.009233259s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-890770 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.552136978s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-890770 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-890770 addons disable ingress-dns --alsologtostderr -v=1: (1.091253993s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-890770 addons disable ingress --alsologtostderr -v=1: (7.624056337s)
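The curl probe above failed with ssh exit status 28, which mirrors curl's own exit code 28 (operation timed out): the request hung for the full 2m10s rather than being refused. What the probe checks, a GET with a spoofed `Host: nginx.example.com` header and a timeout, can be sketched with stdlib Python against a local stand-in server; the echo handler and ephemeral port here are illustrative, not part of the test suite:

```python
import http.server
import threading
import urllib.request

# Stand-in for the nginx backend behind the ingress: echoes the Host
# header back so we can verify host-based routing reached the right vhost.
class EchoHost(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = self.headers.get("Host", "").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep stdout quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHost)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Equivalent of: curl -s http://127.0.0.1:<port>/ -H 'Host: nginx.example.com'
# The timeout matters: in the failing run the request timed out (curl exit 28)
# instead of returning a response.
req = urllib.request.Request(f"http://127.0.0.1:{port}/",
                             headers={"Host": "nginx.example.com"})
with urllib.request.urlopen(req, timeout=5) as resp:
    assert resp.status == 200
    body = resp.read().decode()
print(body)

server.shutdown()
```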
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-890770
helpers_test.go:235: (dbg) docker inspect addons-890770:

-- stdout --
	[
	    {
	        "Id": "8157f8cfbc48a05567b045e64347713dcb1771dfe0057f8640598ef89f485011",
	        "Created": "2023-11-07T23:02:12.491067575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 17870,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:02:12.818634617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dbc648475405a75e8c472743ce721cb0b74db98d9501831a17a27a54e2bd3e47",
	        "ResolvConfPath": "/var/lib/docker/containers/8157f8cfbc48a05567b045e64347713dcb1771dfe0057f8640598ef89f485011/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8157f8cfbc48a05567b045e64347713dcb1771dfe0057f8640598ef89f485011/hostname",
	        "HostsPath": "/var/lib/docker/containers/8157f8cfbc48a05567b045e64347713dcb1771dfe0057f8640598ef89f485011/hosts",
	        "LogPath": "/var/lib/docker/containers/8157f8cfbc48a05567b045e64347713dcb1771dfe0057f8640598ef89f485011/8157f8cfbc48a05567b045e64347713dcb1771dfe0057f8640598ef89f485011-json.log",
	        "Name": "/addons-890770",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-890770:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-890770",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f0f18bcaa00d82873ff67f9b61f838c0d8261337ab1626e2dfeed9070cc30cb9-init/diff:/var/lib/docker/overlay2/ae2a32444c6a9314aa09825baf7df8a89e3a23e782d3f3ba648a13de53e3f1b1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0f18bcaa00d82873ff67f9b61f838c0d8261337ab1626e2dfeed9070cc30cb9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0f18bcaa00d82873ff67f9b61f838c0d8261337ab1626e2dfeed9070cc30cb9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0f18bcaa00d82873ff67f9b61f838c0d8261337ab1626e2dfeed9070cc30cb9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-890770",
	                "Source": "/var/lib/docker/volumes/addons-890770/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-890770",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-890770",
	                "name.minikube.sigs.k8s.io": "addons-890770",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "732fba7ef137b235a85131cec4bcef6f5658bb225d7b80185a1daf893e5928df",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/732fba7ef137",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-890770": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8157f8cfbc48",
	                        "addons-890770"
	                    ],
	                    "NetworkID": "82da66fb1970dd8cf2fe0a8004844531478a9af7e37d73581ba9b920d555d851",
	                    "EndpointID": "82465f489e291caf4bcf2a6eb3de982fb39c4f85843e4d4c61bd76e2cd55fca2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-890770 -n addons-890770
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-890770 logs -n 25: (1.244242708s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-778371                                                                     | download-only-778371   | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC | 07 Nov 23 23:01 UTC |
	| delete  | -p download-only-778371                                                                     | download-only-778371   | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC | 07 Nov 23 23:01 UTC |
	| start   | --download-only -p                                                                          | download-docker-849450 | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |                     |
	|         | download-docker-849450                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-849450                                                                   | download-docker-849450 | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC | 07 Nov 23 23:01 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-173782   | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |                     |
	|         | binary-mirror-173782                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41013                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-173782                                                                     | binary-mirror-173782   | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC | 07 Nov 23 23:01 UTC |
	| addons  | enable dashboard -p                                                                         | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |                     |
	|         | addons-890770                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |                     |
	|         | addons-890770                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-890770 --wait=true                                                                | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC | 07 Nov 23 23:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:04 UTC |
	|         | addons-890770                                                                               |                        |         |         |                     |                     |
	| addons  | addons-890770 addons                                                                        | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:04 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-890770 ssh cat                                                                       | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:04 UTC |
	|         | /opt/local-path-provisioner/pvc-fbfec044-5f57-4c8e-aafa-666d902b4ff6_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-890770 addons disable                                                                | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:05 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-890770 ip                                                                            | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:04 UTC |
	| addons  | addons-890770 addons disable                                                                | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:04 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:04 UTC |
	|         | -p addons-890770                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-890770 addons disable                                                                | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:04 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:04 UTC |
	|         | addons-890770                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:04 UTC | 07 Nov 23 23:04 UTC |
	|         | -p addons-890770                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-890770 ssh curl -s                                                                   | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-890770 addons                                                                        | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:05 UTC | 07 Nov 23 23:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-890770 addons                                                                        | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:05 UTC | 07 Nov 23 23:05 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-890770 ip                                                                            | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:07 UTC | 07 Nov 23 23:07 UTC |
	| addons  | addons-890770 addons disable                                                                | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:07 UTC | 07 Nov 23 23:07 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-890770 addons disable                                                                | addons-890770          | jenkins | v1.32.0 | 07 Nov 23 23:07 UTC | 07 Nov 23 23:07 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:01:49
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:01:49.671952   17192 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:01:49.672073   17192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:49.672085   17192 out.go:309] Setting ErrFile to fd 2...
	I1107 23:01:49.672091   17192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:49.672279   17192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:01:49.672891   17192 out.go:303] Setting JSON to false
	I1107 23:01:49.673655   17192 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2660,"bootTime":1699395450,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:01:49.673711   17192 start.go:138] virtualization: kvm guest
	I1107 23:01:49.676166   17192 out.go:177] * [addons-890770] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:01:49.677712   17192 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:01:49.677699   17192 notify.go:220] Checking for updates...
	I1107 23:01:49.679334   17192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:01:49.680891   17192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:01:49.682521   17192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:01:49.684062   17192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:01:49.685768   17192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:01:49.687550   17192 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:01:49.708011   17192 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:01:49.708113   17192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:01:49.758218   17192 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-11-07 23:01:49.749751689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:01:49.758363   17192 docker.go:295] overlay module found
	I1107 23:01:49.760334   17192 out.go:177] * Using the docker driver based on user configuration
	I1107 23:01:49.761777   17192 start.go:298] selected driver: docker
	I1107 23:01:49.761791   17192 start.go:902] validating driver "docker" against <nil>
	I1107 23:01:49.761801   17192 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:01:49.762589   17192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:01:49.814277   17192 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-11-07 23:01:49.806218747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:01:49.814427   17192 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:01:49.814646   17192 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:01:49.816621   17192 out.go:177] * Using Docker driver with root privileges
	I1107 23:01:49.818164   17192 cni.go:84] Creating CNI manager for ""
	I1107 23:01:49.818183   17192 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:01:49.818196   17192 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:01:49.818206   17192 start_flags.go:323] config:
	{Name:addons-890770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-890770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:01:49.819821   17192 out.go:177] * Starting control plane node addons-890770 in cluster addons-890770
	I1107 23:01:49.821308   17192 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:01:49.822793   17192 out.go:177] * Pulling base image ...
	I1107 23:01:49.824308   17192 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:01:49.824346   17192 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:01:49.824355   17192 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:01:49.824363   17192 cache.go:56] Caching tarball of preloaded images
	I1107 23:01:49.824455   17192 preload.go:174] Found /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:01:49.824467   17192 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:01:49.824823   17192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/config.json ...
	I1107 23:01:49.824845   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/config.json: {Name:mkd874fe3e9d280cf77cc0976a17bb60904f128c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:01:49.839074   17192 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:01:49.839206   17192 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 23:01:49.839225   17192 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1107 23:01:49.839231   17192 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1107 23:01:49.839245   17192 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1107 23:01:49.839256   17192 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1107 23:02:01.637627   17192 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from cached tarball
	I1107 23:02:01.637701   17192 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:02:01.637744   17192 start.go:365] acquiring machines lock for addons-890770: {Name:mk0baa57d0023d6f3c848abdb975c269b35bc8a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:02:01.637869   17192 start.go:369] acquired machines lock for "addons-890770" in 97.771µs
	I1107 23:02:01.637899   17192 start.go:93] Provisioning new machine with config: &{Name:addons-890770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-890770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:02:01.638031   17192 start.go:125] createHost starting for "" (driver="docker")
	I1107 23:02:01.640307   17192 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1107 23:02:01.640610   17192 start.go:159] libmachine.API.Create for "addons-890770" (driver="docker")
	I1107 23:02:01.640645   17192 client.go:168] LocalClient.Create starting
	I1107 23:02:01.640808   17192 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem
	I1107 23:02:01.755721   17192 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem
	I1107 23:02:02.228090   17192 cli_runner.go:164] Run: docker network inspect addons-890770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 23:02:02.244024   17192 cli_runner.go:211] docker network inspect addons-890770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 23:02:02.244109   17192 network_create.go:281] running [docker network inspect addons-890770] to gather additional debugging logs...
	I1107 23:02:02.244134   17192 cli_runner.go:164] Run: docker network inspect addons-890770
	W1107 23:02:02.259384   17192 cli_runner.go:211] docker network inspect addons-890770 returned with exit code 1
	I1107 23:02:02.259412   17192 network_create.go:284] error running [docker network inspect addons-890770]: docker network inspect addons-890770: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-890770 not found
	I1107 23:02:02.259423   17192 network_create.go:286] output of [docker network inspect addons-890770]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-890770 not found
	
	** /stderr **
	I1107 23:02:02.259507   17192 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:02:02.276980   17192 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027d7bc0}
	I1107 23:02:02.277027   17192 network_create.go:124] attempt to create docker network addons-890770 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 23:02:02.277071   17192 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-890770 addons-890770
	I1107 23:02:02.329923   17192 network_create.go:108] docker network addons-890770 192.168.49.0/24 created
	I1107 23:02:02.329953   17192 kic.go:121] calculated static IP "192.168.49.2" for the "addons-890770" container
	I1107 23:02:02.330034   17192 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:02:02.348716   17192 cli_runner.go:164] Run: docker volume create addons-890770 --label name.minikube.sigs.k8s.io=addons-890770 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:02:02.366896   17192 oci.go:103] Successfully created a docker volume addons-890770
	I1107 23:02:02.366982   17192 cli_runner.go:164] Run: docker run --rm --name addons-890770-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-890770 --entrypoint /usr/bin/test -v addons-890770:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:02:07.223779   17192 cli_runner.go:217] Completed: docker run --rm --name addons-890770-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-890770 --entrypoint /usr/bin/test -v addons-890770:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (4.856730939s)
	I1107 23:02:07.223811   17192 oci.go:107] Successfully prepared a docker volume addons-890770
	I1107 23:02:07.223842   17192 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:02:07.223865   17192 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:02:07.223928   17192 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-890770:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:02:12.420514   17192 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-890770:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.196537257s)
	I1107 23:02:12.420550   17192 kic.go:203] duration metric: took 5.196683 seconds to extract preloaded images to volume
	W1107 23:02:12.420696   17192 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:02:12.420801   17192 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:02:12.476076   17192 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-890770 --name addons-890770 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-890770 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-890770 --network addons-890770 --ip 192.168.49.2 --volume addons-890770:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:02:12.827660   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Running}}
	I1107 23:02:12.846975   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:12.867656   17192 cli_runner.go:164] Run: docker exec addons-890770 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:02:12.920566   17192 oci.go:144] the created container "addons-890770" has a running status.
	I1107 23:02:12.920594   17192 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa...
	I1107 23:02:13.016808   17192 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:02:13.037477   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:13.054319   17192 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:02:13.054339   17192 kic_runner.go:114] Args: [docker exec --privileged addons-890770 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 23:02:13.114159   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:13.135978   17192 machine.go:88] provisioning docker machine ...
	I1107 23:02:13.136022   17192 ubuntu.go:169] provisioning hostname "addons-890770"
	I1107 23:02:13.136099   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:13.153001   17192 main.go:141] libmachine: Using SSH client type: native
	I1107 23:02:13.153356   17192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1107 23:02:13.153375   17192 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-890770 && echo "addons-890770" | sudo tee /etc/hostname
	I1107 23:02:13.155106   17192 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51326->127.0.0.1:32772: read: connection reset by peer
	I1107 23:02:16.282387   17192 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-890770
	
	I1107 23:02:16.282481   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:16.300129   17192 main.go:141] libmachine: Using SSH client type: native
	I1107 23:02:16.300617   17192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1107 23:02:16.300646   17192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-890770' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-890770/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-890770' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:02:16.415953   17192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:02:16.415987   17192 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9432/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9432/.minikube}
	I1107 23:02:16.416013   17192 ubuntu.go:177] setting up certificates
	I1107 23:02:16.416024   17192 provision.go:83] configureAuth start
	I1107 23:02:16.416081   17192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-890770
	I1107 23:02:16.432800   17192 provision.go:138] copyHostCerts
	I1107 23:02:16.432868   17192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem (1078 bytes)
	I1107 23:02:16.432989   17192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem (1123 bytes)
	I1107 23:02:16.433049   17192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem (1675 bytes)
	I1107 23:02:16.433090   17192 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem org=jenkins.addons-890770 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-890770]
	I1107 23:02:16.480365   17192 provision.go:172] copyRemoteCerts
	I1107 23:02:16.480415   17192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:02:16.480445   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:16.496810   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:16.588287   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:02:16.612065   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1107 23:02:16.633782   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:02:16.655315   17192 provision.go:86] duration metric: configureAuth took 239.275822ms
	I1107 23:02:16.655349   17192 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:02:16.655549   17192 config.go:182] Loaded profile config "addons-890770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:02:16.655699   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:16.672272   17192 main.go:141] libmachine: Using SSH client type: native
	I1107 23:02:16.672627   17192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1107 23:02:16.672645   17192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:02:16.878461   17192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:02:16.878492   17192 machine.go:91] provisioned docker machine in 3.742487717s
	I1107 23:02:16.878501   17192 client.go:171] LocalClient.Create took 15.237842605s
	I1107 23:02:16.878515   17192 start.go:167] duration metric: libmachine.API.Create for "addons-890770" took 15.237908025s
	I1107 23:02:16.878522   17192 start.go:300] post-start starting for "addons-890770" (driver="docker")
	I1107 23:02:16.878531   17192 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:02:16.878587   17192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:02:16.878631   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:16.896393   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:16.984337   17192 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:02:16.987626   17192 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:02:16.987655   17192 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:02:16.987664   17192 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:02:16.987671   17192 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:02:16.987680   17192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/addons for local assets ...
	I1107 23:02:16.987737   17192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/files for local assets ...
	I1107 23:02:16.987780   17192 start.go:303] post-start completed in 109.232602ms
	I1107 23:02:16.988080   17192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-890770
	I1107 23:02:17.004594   17192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/config.json ...
	I1107 23:02:17.004868   17192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:02:17.004906   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:17.021168   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:17.104435   17192 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:02:17.108537   17192 start.go:128] duration metric: createHost completed in 15.470488403s
	I1107 23:02:17.108563   17192 start.go:83] releasing machines lock for "addons-890770", held for 15.470678921s
	I1107 23:02:17.108628   17192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-890770
	I1107 23:02:17.125529   17192 ssh_runner.go:195] Run: cat /version.json
	I1107 23:02:17.125573   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:17.125578   17192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:02:17.125629   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:17.143155   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:17.143487   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:17.308189   17192 ssh_runner.go:195] Run: systemctl --version
	I1107 23:02:17.312215   17192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:02:17.449299   17192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:02:17.453516   17192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:02:17.471218   17192 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:02:17.471294   17192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:02:17.500851   17192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
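The two find/mv steps above sideline the default loopback and bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix so CRI-O will not load them (kindnet's config is left alone). A minimal sketch of the same rename pattern against a scratch directory instead of the real `/etc/cni/net.d` (filenames are illustrative):

```shell
#!/bin/sh
# Sketch: disable bridge/podman CNI configs by renaming them, mirroring the
# find/mv commands in the log. Uses a temp dir instead of /etc/cni/net.d.
set -eu

cni=$(mktemp -d)
touch "$cni/87-podman-bridge.conflist" "$cni/100-crio-bridge.conf" "$cni/10-kindnet.conflist"

# Rename anything matching *bridge* or *podman* that is not already disabled.
find "$cni" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$cni"
```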
	I1107 23:02:17.500875   17192 start.go:472] detecting cgroup driver to use...
	I1107 23:02:17.500911   17192 detect.go:196] detected "cgroupfs" cgroup driver on host os
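The detection step above settles on "cgroupfs" for this host. One common way to probe the cgroup layout (a simplified stand-in for minikube's detect.go, not its actual code) is to check for the unified-hierarchy control file:

```shell
#!/bin/sh
# Sketch: distinguish cgroup v2 (unified hierarchy) from v1 by probing
# /sys/fs/cgroup/cgroup.controllers. Simplified stand-in for minikube's detect.go.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
  cgroup_version=v2
else
  cgroup_version=v1
fi
echo "cgroup hierarchy: $cgroup_version"
```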
	I1107 23:02:17.500954   17192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:02:17.514830   17192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:02:17.525210   17192 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:02:17.525259   17192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:02:17.538133   17192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:02:17.550876   17192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:02:17.624623   17192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:02:17.704349   17192 docker.go:219] disabling docker service ...
	I1107 23:02:17.704423   17192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:02:17.722252   17192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:02:17.733500   17192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:02:17.815308   17192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:02:17.901033   17192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:02:17.911333   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:02:17.925971   17192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:02:17.926044   17192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:02:17.934892   17192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:02:17.935005   17192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:02:17.943677   17192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:02:17.952605   17192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:02:17.961477   17192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:02:17.969595   17192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:02:17.977214   17192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:02:17.984876   17192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:02:18.056435   17192 ssh_runner.go:195] Run: sudo systemctl restart crio
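The sed edits above rewrite `pause_image` and `cgroup_manager` in CRI-O's drop-in config, then drop and re-insert `conmon_cgroup = "pod"` so conmon lands in the per-pod cgroup. The same substitution sequence against a scratch copy of `02-crio.conf` (the file contents here are illustrative):

```shell
#!/bin/sh
set -eu
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Same substitutions the log runs against /etc/crio/crio.conf.d/02-crio.conf.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```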
	I1107 23:02:18.164043   17192 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:02:18.164120   17192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:02:18.167502   17192 start.go:540] Will wait 60s for crictl version
	I1107 23:02:18.167553   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:02:18.170641   17192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:02:18.204563   17192 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1107 23:02:18.204683   17192 ssh_runner.go:195] Run: crio --version
	I1107 23:02:18.239488   17192 ssh_runner.go:195] Run: crio --version
	I1107 23:02:18.276448   17192 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1107 23:02:18.278121   17192 cli_runner.go:164] Run: docker network inspect addons-890770 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:02:18.293826   17192 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1107 23:02:18.297327   17192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
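The grep/echo pipeline above is an idempotent hosts-file update: strip any existing `host.minikube.internal` entry, append the fresh one, and copy the result back, so re-runs never duplicate the line. The same trick against a scratch hosts file:

```shell
#!/bin/sh
# Sketch: idempotently pin a hosts entry, as the log's /etc/hosts pipeline does.
# Operates on a temp file instead of the real /etc/hosts.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

tab=$(printf '\t')
# Remove any stale host.minikube.internal line, then append the current one.
{ grep -v "${tab}host.minikube.internal$" "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```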
	I1107 23:02:18.307189   17192 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:02:18.307248   17192 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:02:18.359536   17192 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:02:18.359561   17192 crio.go:415] Images already preloaded, skipping extraction
	I1107 23:02:18.359617   17192 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:02:18.390242   17192 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:02:18.390262   17192 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:02:18.390335   17192 ssh_runner.go:195] Run: crio config
	I1107 23:02:18.432659   17192 cni.go:84] Creating CNI manager for ""
	I1107 23:02:18.432680   17192 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:02:18.432697   17192 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:02:18.432715   17192 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-890770 NodeName:addons-890770 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:02:18.432828   17192 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-890770"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
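The rendered config above bundles four kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into one `---`-separated YAML stream. A quick sanity check over a trimmed-down stand-in of that file (the full config carries many more fields):

```shell
#!/bin/sh
# Sketch: count the API documents in a multi-document kubeadm config stream,
# using a minimal stand-in for the file written to /var/tmp/minikube/kubeadm.yaml.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# Count the kinds present in the stream.
kinds=$(grep -c '^kind:' "$cfg")
echo "documents: $kinds"
```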
	
	I1107 23:02:18.432878   17192 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-890770 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-890770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:02:18.432927   17192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:02:18.440951   17192 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:02:18.441016   17192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:02:18.448599   17192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1107 23:02:18.465145   17192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:02:18.481225   17192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1107 23:02:18.497040   17192 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:02:18.500209   17192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:02:18.510374   17192 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770 for IP: 192.168.49.2
	I1107 23:02:18.510405   17192 certs.go:190] acquiring lock for shared ca certs: {Name:mkbe2c97e30f744ec2581d086567acaa8822f7ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:18.510529   17192 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key
	I1107 23:02:18.798375   17192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt ...
	I1107 23:02:18.798405   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt: {Name:mk41ddd67fd912f7c1e6293c0c8cb4e869279263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:18.798598   17192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key ...
	I1107 23:02:18.798611   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key: {Name:mk75f2be2813825392fb50ecfabc31cca8cbf185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:18.798713   17192 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key
	I1107 23:02:18.868434   17192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.crt ...
	I1107 23:02:18.868470   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.crt: {Name:mk353a9301bb8cba3aeb39e741032f0bafde15f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:18.868686   17192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key ...
	I1107 23:02:18.868704   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key: {Name:mkf0ebb193c2da56f11b5057324e123c2600651b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:18.868845   17192 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.key
	I1107 23:02:18.868863   17192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt with IP's: []
	I1107 23:02:18.951648   17192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt ...
	I1107 23:02:18.951678   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: {Name:mka5ba1890f857d2142d5113d109b8af654a959c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:18.951884   17192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.key ...
	I1107 23:02:18.951900   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.key: {Name:mk40cd562cd3958e50202938e90fcaa51d6cf59e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:18.951998   17192 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.key.dd3b5fb2
	I1107 23:02:18.952016   17192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:02:19.367619   17192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.crt.dd3b5fb2 ...
	I1107 23:02:19.367651   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.crt.dd3b5fb2: {Name:mk606cb6b52aa05861dfceb6a2c3cbd9aa7db5e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:19.367888   17192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.key.dd3b5fb2 ...
	I1107 23:02:19.367906   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.key.dd3b5fb2: {Name:mk8649c8b6ff286a36fa025489b9f62e326071f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:19.368001   17192 certs.go:337] copying /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.crt
	I1107 23:02:19.368073   17192 certs.go:341] copying /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.key
	I1107 23:02:19.368116   17192 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/proxy-client.key
	I1107 23:02:19.368131   17192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/proxy-client.crt with IP's: []
	I1107 23:02:19.464825   17192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/proxy-client.crt ...
	I1107 23:02:19.464856   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/proxy-client.crt: {Name:mk71fe3c8297a500010d47137dd5d62ef867cc6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:19.465042   17192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/proxy-client.key ...
	I1107 23:02:19.465057   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/proxy-client.key: {Name:mk01edf4c88b5ab03a7fef8733e267cf62fd0b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:19.465247   17192 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 23:02:19.465288   17192 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:02:19.465311   17192 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:02:19.465332   17192 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem (1675 bytes)
	I1107 23:02:19.465914   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:02:19.487682   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:02:19.509390   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:02:19.530846   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 23:02:19.552031   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:02:19.573421   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:02:19.595147   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:02:19.616813   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:02:19.638358   17192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:02:19.659891   17192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:02:19.676759   17192 ssh_runner.go:195] Run: openssl version
	I1107 23:02:19.681917   17192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:02:19.690746   17192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:02:19.694130   17192 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:02:19.694192   17192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:02:19.700472   17192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
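The `openssl x509 -hash` call above computes the subject hash that OpenSSL's certificate lookup expects to find as a `<hash>.0` symlink under `/etc/ssl/certs` (here, `b5213941.0` pointing at minikubeCA.pem). The same dance with a throwaway self-signed cert in a scratch directory (subject name is illustrative):

```shell
#!/bin/sh
set -eu
d=$(mktemp -d)
# Throwaway self-signed CA, standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=scratchCA" \
  -keyout "$d/ca.key" -out "$d/ca.pem" 2>/dev/null

# Symlink the cert under its subject hash, as OpenSSL's lookup requires.
h=$(openssl x509 -hash -noout -in "$d/ca.pem")
ln -fs "$d/ca.pem" "$d/$h.0"
ls -l "$d/$h.0"
```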
	I1107 23:02:19.709262   17192 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:02:19.712557   17192 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:02:19.712607   17192 kubeadm.go:404] StartCluster: {Name:addons-890770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-890770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:02:19.712678   17192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:02:19.712743   17192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:02:19.746918   17192 cri.go:89] found id: ""
	I1107 23:02:19.746975   17192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:02:19.755475   17192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:02:19.763724   17192 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 23:02:19.763805   17192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:02:19.772302   17192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:02:19.772357   17192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 23:02:19.817703   17192 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1107 23:02:19.818104   17192 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:02:19.855192   17192 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:02:19.855279   17192 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1107 23:02:19.855330   17192 kubeadm.go:322] OS: Linux
	I1107 23:02:19.855419   17192 kubeadm.go:322] CGROUPS_CPU: enabled
	I1107 23:02:19.855474   17192 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1107 23:02:19.855512   17192 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1107 23:02:19.855565   17192 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1107 23:02:19.855669   17192 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1107 23:02:19.855742   17192 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1107 23:02:19.855839   17192 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1107 23:02:19.855910   17192 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1107 23:02:19.856035   17192 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1107 23:02:19.919663   17192 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:02:19.919821   17192 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:02:19.919899   17192 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:02:20.116157   17192 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:02:20.118734   17192 out.go:204]   - Generating certificates and keys ...
	I1107 23:02:20.118876   17192 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:02:20.118954   17192 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:02:20.177925   17192 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:02:20.308279   17192 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:02:20.417442   17192 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:02:20.502701   17192 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:02:20.630331   17192 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:02:20.630494   17192 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-890770 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:02:20.753609   17192 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:02:20.753778   17192 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-890770 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:02:20.808659   17192 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:02:20.906639   17192 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:02:21.042672   17192 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:02:21.042784   17192 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:02:21.168832   17192 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:02:21.429099   17192 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:02:21.712148   17192 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:02:21.806624   17192 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:02:21.807125   17192 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:02:21.809391   17192 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:02:21.812042   17192 out.go:204]   - Booting up control plane ...
	I1107 23:02:21.812223   17192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:02:21.812348   17192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:02:21.812429   17192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:02:21.820366   17192 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:02:21.821112   17192 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:02:21.821191   17192 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:02:21.897079   17192 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:02:26.900064   17192 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.003128 seconds
	I1107 23:02:26.900239   17192 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:02:26.911172   17192 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:02:27.429928   17192 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:02:27.430127   17192 kubeadm.go:322] [mark-control-plane] Marking the node addons-890770 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:02:27.939656   17192 kubeadm.go:322] [bootstrap-token] Using token: xua6yr.wztr6z7n816ybdph
	I1107 23:02:27.941344   17192 out.go:204]   - Configuring RBAC rules ...
	I1107 23:02:27.941457   17192 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:02:27.945213   17192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:02:27.951414   17192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:02:27.954353   17192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:02:27.957018   17192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:02:27.961113   17192 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:02:27.971406   17192 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:02:28.192624   17192 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:02:28.387854   17192 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:02:28.388971   17192 kubeadm.go:322] 
	I1107 23:02:28.389052   17192 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:02:28.389094   17192 kubeadm.go:322] 
	I1107 23:02:28.389221   17192 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:02:28.389245   17192 kubeadm.go:322] 
	I1107 23:02:28.389282   17192 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:02:28.389371   17192 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:02:28.389461   17192 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:02:28.389473   17192 kubeadm.go:322] 
	I1107 23:02:28.389544   17192 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1107 23:02:28.389553   17192 kubeadm.go:322] 
	I1107 23:02:28.389612   17192 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:02:28.389626   17192 kubeadm.go:322] 
	I1107 23:02:28.389683   17192 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:02:28.389770   17192 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:02:28.389859   17192 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:02:28.389869   17192 kubeadm.go:322] 
	I1107 23:02:28.389982   17192 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:02:28.390080   17192 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:02:28.390092   17192 kubeadm.go:322] 
	I1107 23:02:28.390193   17192 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xua6yr.wztr6z7n816ybdph \
	I1107 23:02:28.390327   17192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 \
	I1107 23:02:28.390368   17192 kubeadm.go:322] 	--control-plane 
	I1107 23:02:28.390383   17192 kubeadm.go:322] 
	I1107 23:02:28.390493   17192 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:02:28.390511   17192 kubeadm.go:322] 
	I1107 23:02:28.390612   17192 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xua6yr.wztr6z7n816ybdph \
	I1107 23:02:28.390747   17192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 
	I1107 23:02:28.392422   17192 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1107 23:02:28.392581   17192 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:02:28.392616   17192 cni.go:84] Creating CNI manager for ""
	I1107 23:02:28.392626   17192 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:02:28.394701   17192 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:02:28.396316   17192 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:02:28.401039   17192 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:02:28.401060   17192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:02:28.417102   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:02:29.034864   17192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:02:29.034931   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:29.034931   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=addons-890770 minikube.k8s.io/updated_at=2023_11_07T23_02_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:29.115449   17192 ops.go:34] apiserver oom_adj: -16
	I1107 23:02:29.115584   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:29.177548   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:29.742231   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:30.242326   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:30.742271   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:31.241923   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:31.742180   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:32.242344   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:32.741935   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:33.241824   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:33.741418   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:34.242148   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:34.742256   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:35.242406   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:35.741458   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:36.241794   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:36.742462   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:37.242033   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:37.742293   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:38.242300   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:38.742302   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:39.241734   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:39.742216   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:40.241816   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:40.741765   17192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:02:40.811607   17192 kubeadm.go:1081] duration metric: took 11.77673321s to wait for elevateKubeSystemPrivileges.
	I1107 23:02:40.811640   17192 kubeadm.go:406] StartCluster complete in 21.099039012s
	I1107 23:02:40.811670   17192 settings.go:142] acquiring lock: {Name:mke2e0b04eb18441805a33c4c4584e304f0bb176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:40.811824   17192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:02:40.812140   17192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/kubeconfig: {Name:mk2d252233a242c1461c7aa60d2f37a37a1be73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:02:40.812353   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:02:40.812360   17192 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1107 23:02:40.812447   17192 addons.go:69] Setting volumesnapshots=true in profile "addons-890770"
	I1107 23:02:40.812459   17192 addons.go:69] Setting cloud-spanner=true in profile "addons-890770"
	I1107 23:02:40.812477   17192 addons.go:231] Setting addon volumesnapshots=true in "addons-890770"
	I1107 23:02:40.812531   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.812540   17192 addons.go:231] Setting addon cloud-spanner=true in "addons-890770"
	I1107 23:02:40.812556   17192 addons.go:69] Setting metrics-server=true in profile "addons-890770"
	I1107 23:02:40.812569   17192 config.go:182] Loaded profile config "addons-890770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:02:40.812582   17192 addons.go:69] Setting ingress=true in profile "addons-890770"
	I1107 23:02:40.812583   17192 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-890770"
	I1107 23:02:40.812598   17192 addons.go:231] Setting addon ingress=true in "addons-890770"
	I1107 23:02:40.812602   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.812574   17192 addons.go:231] Setting addon metrics-server=true in "addons-890770"
	I1107 23:02:40.812641   17192 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-890770"
	I1107 23:02:40.812665   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.812673   17192 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-890770"
	I1107 23:02:40.812575   17192 addons.go:69] Setting default-storageclass=true in profile "addons-890770"
	I1107 23:02:40.812690   17192 addons.go:69] Setting storage-provisioner=true in profile "addons-890770"
	I1107 23:02:40.812693   17192 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-890770"
	I1107 23:02:40.812701   17192 addons.go:231] Setting addon storage-provisioner=true in "addons-890770"
	I1107 23:02:40.812705   17192 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-890770"
	I1107 23:02:40.812711   17192 addons.go:69] Setting inspektor-gadget=true in profile "addons-890770"
	I1107 23:02:40.812710   17192 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-890770"
	I1107 23:02:40.812774   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.812448   17192 addons.go:69] Setting helm-tiller=true in profile "addons-890770"
	I1107 23:02:40.812659   17192 addons.go:69] Setting registry=true in profile "addons-890770"
	I1107 23:02:40.812850   17192 addons.go:231] Setting addon registry=true in "addons-890770"
	I1107 23:02:40.812884   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.812644   17192 addons.go:69] Setting gcp-auth=true in profile "addons-890770"
	I1107 23:02:40.813080   17192 mustload.go:65] Loading cluster: addons-890770
	I1107 23:02:40.813130   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.813135   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.813139   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.813139   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.813280   17192 config.go:182] Loaded profile config "addons-890770": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:02:40.813292   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.812703   17192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-890770"
	I1107 23:02:40.813414   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.813509   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.813646   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.812661   17192 addons.go:69] Setting ingress-dns=true in profile "addons-890770"
	I1107 23:02:40.814173   17192 addons.go:231] Setting addon ingress-dns=true in "addons-890770"
	I1107 23:02:40.814245   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.814709   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.812677   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.812835   17192 addons.go:231] Setting addon helm-tiller=true in "addons-890770"
	I1107 23:02:40.814776   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.815202   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.812751   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.818075   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.818213   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.812732   17192 addons.go:231] Setting addon inspektor-gadget=true in "addons-890770"
	I1107 23:02:40.819038   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.812684   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.844538   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.846844   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.851909   17192 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1107 23:02:40.852569   17192 addons.go:231] Setting addon default-storageclass=true in "addons-890770"
	I1107 23:02:40.853783   17192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1107 23:02:40.853929   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1107 23:02:40.853999   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.854077   17192 out.go:177]   - Using image docker.io/registry:2.8.3
	I1107 23:02:40.856350   17192 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1107 23:02:40.855624   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.858011   17192 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1107 23:02:40.858031   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1107 23:02:40.858090   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.858590   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.861962   17192 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-890770"
	I1107 23:02:40.862019   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.866108   17192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1107 23:02:40.862563   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:40.871169   17192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:02:40.871482   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:40.874605   17192 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1107 23:02:40.876325   17192 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1107 23:02:40.876346   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1107 23:02:40.876405   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.878017   17192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:02:40.879736   17192 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 23:02:40.879758   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1107 23:02:40.879887   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.884400   17192 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1107 23:02:40.886083   17192 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1107 23:02:40.886126   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1107 23:02:40.886200   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.885407   17192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-890770" context rescaled to 1 replicas
	I1107 23:02:40.886324   17192 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:02:40.887986   17192 out.go:177] * Verifying Kubernetes components...
	I1107 23:02:40.889881   17192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:02:40.894761   17192 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1107 23:02:40.896972   17192 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 23:02:40.896994   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1107 23:02:40.897055   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.901192   17192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:02:40.903860   17192 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:02:40.903883   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:02:40.903947   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.904417   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.913796   17192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1107 23:02:40.915666   17192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1107 23:02:40.917261   17192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1107 23:02:40.918922   17192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1107 23:02:40.917513   17192 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:02:40.923723   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:02:40.924171   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.924246   17192 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1107 23:02:40.926031   17192 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1107 23:02:40.926049   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1107 23:02:40.926092   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.925181   17192 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1107 23:02:40.925207   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.929531   17192 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1107 23:02:40.930754   17192 out.go:177]   - Using image docker.io/busybox:stable
	I1107 23:02:40.932190   17192 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1107 23:02:40.932208   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1107 23:02:40.932267   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.935534   17192 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1107 23:02:40.937538   17192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1107 23:02:40.939096   17192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1107 23:02:40.940733   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.942402   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.941919   17192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1107 23:02:40.943172   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1107 23:02:40.943231   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.962068   17192 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1107 23:02:40.958890   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.964052   17192 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1107 23:02:40.964072   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1107 23:02:40.964134   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.964160   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.967040   17192 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1107 23:02:40.968521   17192 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1107 23:02:40.968542   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1107 23:02:40.968599   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:40.968700   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.969265   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.976527   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.978385   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.987112   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	W1107 23:02:40.997119   17192 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1107 23:02:40.997154   17192 retry.go:31] will retry after 275.178317ms: ssh: handshake failed: EOF
	I1107 23:02:40.997276   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:40.999830   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	W1107 23:02:41.004312   17192 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1107 23:02:41.004347   17192 retry.go:31] will retry after 183.326109ms: ssh: handshake failed: EOF
	I1107 23:02:41.101269   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:02:41.102262   17192 node_ready.go:35] waiting up to 6m0s for node "addons-890770" to be "Ready" ...
	I1107 23:02:41.195778   17192 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1107 23:02:41.195811   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1107 23:02:41.292708   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 23:02:41.296954   17192 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1107 23:02:41.297030   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1107 23:02:41.302882   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1107 23:02:41.403281   17192 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1107 23:02:41.403315   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1107 23:02:41.480452   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1107 23:02:41.481694   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:02:41.484855   17192 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1107 23:02:41.484926   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1107 23:02:41.488612   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 23:02:41.500466   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:02:41.581345   17192 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1107 23:02:41.581454   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1107 23:02:41.682181   17192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1107 23:02:41.682217   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1107 23:02:41.684649   17192 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1107 23:02:41.684724   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1107 23:02:41.703870   17192 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1107 23:02:41.703895   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1107 23:02:41.792336   17192 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1107 23:02:41.792423   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1107 23:02:41.801477   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1107 23:02:41.890729   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1107 23:02:41.980779   17192 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1107 23:02:41.980856   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1107 23:02:41.990750   17192 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1107 23:02:41.990841   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1107 23:02:41.995592   17192 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1107 23:02:41.995858   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1107 23:02:41.995820   17192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1107 23:02:41.996038   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1107 23:02:42.098561   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1107 23:02:42.190749   17192 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1107 23:02:42.190820   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1107 23:02:42.196697   17192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1107 23:02:42.196783   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1107 23:02:42.384757   17192 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1107 23:02:42.384838   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1107 23:02:42.402364   17192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1107 23:02:42.402390   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1107 23:02:42.582714   17192 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1107 23:02:42.582746   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1107 23:02:42.594140   17192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:02:42.594179   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1107 23:02:42.681343   17192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1107 23:02:42.681773   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1107 23:02:42.891559   17192 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:02:42.891645   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1107 23:02:43.087807   17192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1107 23:02:43.087886   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1107 23:02:43.187174   17192 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1107 23:02:43.187263   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1107 23:02:43.293575   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:02:43.301118   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:02:43.390402   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1107 23:02:43.396877   17192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1107 23:02:43.396955   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1107 23:02:43.782246   17192 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1107 23:02:43.782343   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1107 23:02:43.885893   17192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.78456384s)
	I1107 23:02:43.886001   17192 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1107 23:02:43.981339   17192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1107 23:02:43.981436   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1107 23:02:44.180391   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1107 23:02:44.481778   17192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1107 23:02:44.481806   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1107 23:02:44.684775   17192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1107 23:02:44.684808   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1107 23:02:44.981693   17192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1107 23:02:44.981717   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1107 23:02:45.081936   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1107 23:02:45.301754   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:02:47.304768   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.011962016s)
	I1107 23:02:47.304802   17192 addons.go:467] Verifying addon ingress=true in "addons-890770"
	I1107 23:02:47.304855   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.001888424s)
	I1107 23:02:47.306894   17192 out.go:177] * Verifying ingress addon...
	I1107 23:02:47.304935   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.82317043s)
	I1107 23:02:47.304988   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.824457009s)
	I1107 23:02:47.305028   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.816338466s)
	I1107 23:02:47.305065   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.804563071s)
	I1107 23:02:47.305155   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.414337322s)
	I1107 23:02:47.308537   17192 addons.go:467] Verifying addon registry=true in "addons-890770"
	I1107 23:02:47.305193   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.206526659s)
	I1107 23:02:47.305262   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.004066664s)
	W1107 23:02:47.308711   17192 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1107 23:02:47.308737   17192 retry.go:31] will retry after 273.957194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1107 23:02:47.305333   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.914842046s)
	I1107 23:02:47.308768   17192 addons.go:467] Verifying addon metrics-server=true in "addons-890770"
	I1107 23:02:47.305412   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.124973774s)
	I1107 23:02:47.305113   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.503549742s)
	I1107 23:02:47.311430   17192 out.go:177] * Verifying registry addon...
	I1107 23:02:47.309211   17192 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1107 23:02:47.313971   17192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1107 23:02:47.318209   17192 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1107 23:02:47.318231   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:47.318528   17192 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1107 23:02:47.318548   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1107 23:02:47.318803   17192 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1107 23:02:47.323744   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:47.324159   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:47.583231   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1107 23:02:47.685435   17192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1107 23:02:47.685496   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:47.701847   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:02:47.705711   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:47.830832   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:47.831269   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:47.884298   17192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1107 23:02:47.904706   17192 addons.go:231] Setting addon gcp-auth=true in "addons-890770"
	I1107 23:02:47.904776   17192 host.go:66] Checking if "addons-890770" exists ...
	I1107 23:02:47.905278   17192 cli_runner.go:164] Run: docker container inspect addons-890770 --format={{.State.Status}}
	I1107 23:02:47.925002   17192 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1107 23:02:47.925058   17192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-890770
	I1107 23:02:47.944120   17192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/addons-890770/id_rsa Username:docker}
	I1107 23:02:48.215477   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.133487016s)
	I1107 23:02:48.215518   17192 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-890770"
	I1107 23:02:48.217275   17192 out.go:177] * Verifying csi-hostpath-driver addon...
	I1107 23:02:48.219619   17192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1107 23:02:48.223071   17192 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1107 23:02:48.223086   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:48.226470   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:48.327457   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:48.327730   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:48.674184   17192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.090912982s)
	I1107 23:02:48.676277   17192 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1107 23:02:48.677982   17192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1107 23:02:48.679594   17192 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1107 23:02:48.679615   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1107 23:02:48.696417   17192 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1107 23:02:48.696444   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1107 23:02:48.712811   17192 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1107 23:02:48.712839   17192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1107 23:02:48.728473   17192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1107 23:02:48.730477   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:48.829646   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:48.830146   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:49.126566   17192 addons.go:467] Verifying addon gcp-auth=true in "addons-890770"
	I1107 23:02:49.128728   17192 out.go:177] * Verifying gcp-auth addon...
	I1107 23:02:49.131236   17192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1107 23:02:49.185634   17192 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1107 23:02:49.185661   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:49.188953   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:49.232002   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:49.328406   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:49.328483   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:49.693542   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:49.784629   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:49.885262   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:49.886444   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:50.193346   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:50.202954   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:02:50.284173   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:50.384400   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:50.384648   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:50.692240   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:50.784178   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:50.882992   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:50.883160   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:51.193389   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:51.230717   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:51.383416   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:51.383600   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:51.692762   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:51.784005   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:51.828606   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:51.828874   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:52.192918   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:52.231621   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:52.328467   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:52.328756   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:52.692679   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:52.701291   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:02:52.731154   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:52.828853   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:52.829015   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:53.193199   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:53.231194   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:53.327756   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:53.328360   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:53.692824   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:53.733038   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:53.828909   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:53.829030   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:54.192153   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:54.231268   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:54.327351   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:54.328405   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:54.692237   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:54.733131   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:54.827523   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:54.827588   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:55.192815   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:55.201297   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:02:55.230823   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:55.328174   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:55.328671   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:55.692173   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:55.731069   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:55.828059   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:55.828190   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:56.192211   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:56.230980   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:56.328096   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:56.328360   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:56.691915   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:56.731293   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:56.827594   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:56.827816   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:57.192676   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:57.230660   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:57.328445   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:57.328742   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:57.692671   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:57.701076   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:02:57.730508   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:57.827445   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:57.827588   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:58.192847   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:58.230837   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:58.328181   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:58.328375   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:58.692414   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:58.730489   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:58.827859   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:58.828034   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:59.191899   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:59.231210   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:59.328577   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:59.328863   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:02:59.693068   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:02:59.701431   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:02:59.730925   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:02:59.828283   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:02:59.828554   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:00.192123   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:00.230659   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:00.328453   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:00.328656   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:00.692210   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:00.730942   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:00.827389   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:00.828167   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:01.192355   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:01.230717   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:01.328034   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:01.328248   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:01.692106   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:01.731124   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:01.828176   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:01.828271   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:02.192208   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:02.200622   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:03:02.231028   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:02.329354   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:02.329548   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:02.692847   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:02.730734   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:02.827928   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:02.828240   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:03.191788   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:03.230616   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:03.327995   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:03.328204   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:03.691965   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:03.730800   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:03.827906   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:03.828444   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:04.192973   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:04.201448   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:03:04.231089   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:04.328122   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:04.328358   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:04.691989   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:04.731001   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:04.828199   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:04.828378   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:05.192155   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:05.231473   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:05.327536   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:05.327885   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:05.692466   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:05.731389   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:05.828438   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:05.829430   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:06.192172   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:06.230743   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:06.327743   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:06.327938   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:06.692911   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:06.701699   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:03:06.730967   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:06.828273   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:06.828513   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:07.192453   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:07.230450   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:07.328133   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:07.328885   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:07.691800   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:07.730465   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:07.827815   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:07.827984   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:08.192937   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:08.230713   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:08.327814   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:08.328121   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:08.692621   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:08.730773   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:08.828312   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:08.828487   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:09.192194   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:09.200720   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:03:09.231016   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:09.329531   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:09.329833   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:09.692873   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:09.731197   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:09.827317   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:09.828354   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:10.192427   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:10.230406   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:10.327317   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:10.327517   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:10.692182   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:10.731005   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:10.827084   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:10.828011   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:11.192530   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:11.200963   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:03:11.230777   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:11.328167   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:11.328541   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:11.692208   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:11.730057   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:11.827412   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:11.828171   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:12.192244   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:12.231034   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:12.327238   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:12.328284   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:12.692295   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:12.730291   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:12.827788   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:12.827798   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:13.192715   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:13.201258   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:03:13.230551   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:13.327575   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:13.327751   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:13.692736   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:13.730699   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:13.827966   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:13.828068   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:14.191842   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:14.230378   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:14.327354   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:14.327727   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:14.692847   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:14.730998   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:14.828473   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:14.828674   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:15.192651   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:15.201589   17192 node_ready.go:58] node "addons-890770" has status "Ready":"False"
	I1107 23:03:15.232304   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:15.382426   17192 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1107 23:03:15.382442   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:15.382448   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:15.692440   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:15.700797   17192 node_ready.go:49] node "addons-890770" has status "Ready":"True"
	I1107 23:03:15.700818   17192 node_ready.go:38] duration metric: took 34.598529668s waiting for node "addons-890770" to be "Ready" ...
	I1107 23:03:15.700827   17192 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:03:15.709957   17192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-twnv4" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:15.731699   17192 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1107 23:03:15.731731   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:15.829981   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:15.830409   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:16.192824   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:16.235203   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:16.328532   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:16.329511   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:16.694121   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:16.728824   17192 pod_ready.go:92] pod "coredns-5dd5756b68-twnv4" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:16.728853   17192 pod_ready.go:81] duration metric: took 1.018867693s waiting for pod "coredns-5dd5756b68-twnv4" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:16.728883   17192 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-890770" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:16.732521   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:16.734362   17192 pod_ready.go:92] pod "etcd-addons-890770" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:16.734382   17192 pod_ready.go:81] duration metric: took 5.487709ms waiting for pod "etcd-addons-890770" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:16.734397   17192 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-890770" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:16.739142   17192 pod_ready.go:92] pod "kube-apiserver-addons-890770" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:16.739167   17192 pod_ready.go:81] duration metric: took 4.761404ms waiting for pod "kube-apiserver-addons-890770" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:16.739180   17192 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-890770" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:16.828739   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:16.828860   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:16.901308   17192 pod_ready.go:92] pod "kube-controller-manager-addons-890770" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:16.901335   17192 pod_ready.go:81] duration metric: took 162.146222ms waiting for pod "kube-controller-manager-addons-890770" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:16.901354   17192 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rrq5l" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:17.192713   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:17.232487   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:17.302944   17192 pod_ready.go:92] pod "kube-proxy-rrq5l" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:17.302974   17192 pod_ready.go:81] duration metric: took 401.599538ms waiting for pod "kube-proxy-rrq5l" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:17.302989   17192 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-890770" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:17.328939   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:17.329046   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:17.692854   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:17.703480   17192 pod_ready.go:92] pod "kube-scheduler-addons-890770" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:17.703512   17192 pod_ready.go:81] duration metric: took 400.506975ms waiting for pod "kube-scheduler-addons-890770" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:17.703526   17192 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:17.783442   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:17.883725   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:17.884041   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:18.192336   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:18.231904   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:18.328638   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:18.328684   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:18.693354   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:18.731987   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:18.828602   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:18.828678   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:19.192838   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:19.231741   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:19.328591   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:19.328740   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:19.692987   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:19.731519   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:19.827936   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:19.830798   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:20.006689   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:20.193102   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:20.232553   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:20.384264   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:20.385814   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:20.692343   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:20.785408   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:20.884232   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:20.884913   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:21.193223   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:21.232282   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:21.328098   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:21.328293   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:21.693100   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:21.731541   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:21.829228   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:21.829332   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:22.007469   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:22.192830   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:22.233152   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:22.327914   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:22.328703   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:22.692595   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:22.732260   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:22.828022   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:22.828533   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:23.193344   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:23.233255   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:23.328401   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:23.328401   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:23.692298   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:23.732755   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:23.829371   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:23.829373   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:24.008241   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:24.195141   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:24.232439   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:24.328386   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:24.328711   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:24.693503   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:24.732221   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:24.836391   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:24.836435   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:25.192724   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:25.231405   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:25.328529   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:25.328715   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:25.693542   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:25.731450   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:25.885900   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:25.886686   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:26.010018   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:26.193156   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:26.231861   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:26.329059   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:26.329212   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:26.693546   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:26.732564   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:26.828563   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:26.828675   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:27.193356   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:27.231789   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:27.327821   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:27.329056   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:27.692939   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:27.732844   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:27.829427   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:27.829525   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:28.193006   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:28.232549   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:28.328652   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:28.328805   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:28.507241   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:28.692102   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:28.739067   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:28.828290   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:28.828357   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:29.192494   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:29.232198   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:29.328508   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:29.328858   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:29.693186   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:29.731615   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:29.828430   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:29.828571   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:30.192884   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:30.231259   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:30.329227   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:30.329432   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:30.507441   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:30.693741   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:30.731665   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:30.827619   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:30.829368   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:31.193907   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:31.284313   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:31.383151   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:31.383246   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:31.692791   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:31.732912   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:31.828178   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:31.828253   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:32.192881   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:32.231479   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:32.329083   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:32.330472   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:32.692486   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:32.733547   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:32.829553   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:32.829692   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:33.009823   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:33.192342   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:33.232056   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:33.327861   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:33.328660   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:33.691982   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:33.731549   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:33.829234   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:33.829351   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:34.192046   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:34.231356   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:34.328606   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:34.328767   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:34.692185   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:34.731822   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:34.827356   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:34.828353   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:35.192123   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:35.231325   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:35.328605   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:35.328631   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:35.507177   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:35.691810   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:35.731489   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:35.828411   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:35.828437   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:36.192461   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:36.232383   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:36.330628   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:36.330692   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:36.692653   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:36.732840   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:36.828802   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:36.829055   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:37.193126   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:37.231345   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:37.328288   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:37.328490   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:37.692255   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:37.731936   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:37.827938   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:37.828123   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:38.006874   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:38.192801   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:38.232238   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:38.328010   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:38.328084   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:38.692553   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:38.731719   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:38.827616   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:38.827832   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:39.192268   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:39.231821   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:39.328396   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:39.328714   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:39.692681   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:39.732154   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:39.828050   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:39.828319   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:40.009330   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:40.192321   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:40.284934   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:40.328023   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:40.328317   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:40.692582   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:40.732064   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:40.827931   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:40.828007   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:41.193151   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:41.231219   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:41.328303   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:41.328304   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:41.692771   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:41.732024   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:41.828189   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:41.828311   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:42.193381   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:42.231691   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:42.328527   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:42.329339   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:42.506800   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:42.692748   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:42.731097   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:42.828086   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:42.828140   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:43.192604   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:43.232125   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:43.328014   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:43.328195   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:43.692302   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:43.731640   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:43.827417   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:43.828895   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:44.193422   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:44.289923   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:44.388654   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:44.389513   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:44.587567   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:44.694065   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:44.787048   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:44.886489   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:44.887253   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:45.193251   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:45.284487   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:45.328082   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:45.328385   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:45.693416   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:45.732343   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:45.828581   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:45.829262   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:46.193047   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:46.285263   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:46.328902   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:46.329004   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:46.693379   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:46.785042   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:46.883113   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:46.883936   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:47.007813   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:47.192563   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:47.231891   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:47.328489   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:47.328797   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:47.692443   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:47.732071   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:47.827668   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:47.829973   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:48.192996   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:48.232024   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:48.327505   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:48.332675   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:48.693324   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:48.731490   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:48.830215   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:48.830388   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:49.192293   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:49.231538   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:49.330218   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:49.330217   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:49.506788   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:49.693052   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:49.732117   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:49.828180   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:49.828288   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:50.192347   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:50.232354   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:50.328752   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:50.328832   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:50.692744   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:50.733063   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:50.827919   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:50.828015   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:51.193164   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:51.231600   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:51.327871   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:51.328829   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:51.508734   17192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"False"
	I1107 23:03:51.693174   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:51.731645   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:51.829359   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:51.829533   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:52.007936   17192 pod_ready.go:92] pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:52.007967   17192 pod_ready.go:81] duration metric: took 34.304433495s waiting for pod "metrics-server-7c66d45ddc-7ggdv" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:52.007980   17192 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zfkgl" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:52.012619   17192 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-zfkgl" in "kube-system" namespace has status "Ready":"True"
	I1107 23:03:52.012640   17192 pod_ready.go:81] duration metric: took 4.652957ms waiting for pod "nvidia-device-plugin-daemonset-zfkgl" in "kube-system" namespace to be "Ready" ...
	I1107 23:03:52.012659   17192 pod_ready.go:38] duration metric: took 36.311820668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:03:52.012674   17192 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:03:52.012698   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1107 23:03:52.012742   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 23:03:52.047681   17192 cri.go:89] found id: "cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302"
	I1107 23:03:52.047700   17192 cri.go:89] found id: ""
	I1107 23:03:52.047708   17192 logs.go:284] 1 containers: [cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302]
	I1107 23:03:52.047773   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:52.051129   17192 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1107 23:03:52.051195   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 23:03:52.084773   17192 cri.go:89] found id: "b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852"
	I1107 23:03:52.084795   17192 cri.go:89] found id: ""
	I1107 23:03:52.084835   17192 logs.go:284] 1 containers: [b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852]
	I1107 23:03:52.084889   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:52.088176   17192 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1107 23:03:52.088235   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 23:03:52.122173   17192 cri.go:89] found id: "1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e"
	I1107 23:03:52.122201   17192 cri.go:89] found id: ""
	I1107 23:03:52.122222   17192 logs.go:284] 1 containers: [1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e]
	I1107 23:03:52.122275   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:52.125746   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1107 23:03:52.125834   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 23:03:52.158596   17192 cri.go:89] found id: "755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838"
	I1107 23:03:52.158619   17192 cri.go:89] found id: ""
	I1107 23:03:52.158628   17192 logs.go:284] 1 containers: [755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838]
	I1107 23:03:52.158684   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:52.161930   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1107 23:03:52.161988   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 23:03:52.193331   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:52.195111   17192 cri.go:89] found id: "43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01"
	I1107 23:03:52.195132   17192 cri.go:89] found id: ""
	I1107 23:03:52.195144   17192 logs.go:284] 1 containers: [43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01]
	I1107 23:03:52.195197   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:52.198669   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 23:03:52.198726   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 23:03:52.231593   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:52.233922   17192 cri.go:89] found id: "fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2"
	I1107 23:03:52.233945   17192 cri.go:89] found id: ""
	I1107 23:03:52.233955   17192 logs.go:284] 1 containers: [fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2]
	I1107 23:03:52.234010   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:52.237389   17192 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1107 23:03:52.237467   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1107 23:03:52.270555   17192 cri.go:89] found id: "b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87"
	I1107 23:03:52.270581   17192 cri.go:89] found id: ""
	I1107 23:03:52.270592   17192 logs.go:284] 1 containers: [b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87]
	I1107 23:03:52.270658   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:52.273981   17192 logs.go:123] Gathering logs for coredns [1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e] ...
	I1107 23:03:52.274010   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e"
	I1107 23:03:52.323282   17192 logs.go:123] Gathering logs for kube-proxy [43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01] ...
	I1107 23:03:52.323322   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01"
	I1107 23:03:52.329403   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:52.329730   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:52.399497   17192 logs.go:123] Gathering logs for kindnet [b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87] ...
	I1107 23:03:52.399536   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87"
	I1107 23:03:52.486245   17192 logs.go:123] Gathering logs for kubelet ...
	I1107 23:03:52.486282   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 23:03:52.563882   17192 logs.go:123] Gathering logs for dmesg ...
	I1107 23:03:52.563919   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 23:03:52.588378   17192 logs.go:123] Gathering logs for describe nodes ...
	I1107 23:03:52.588423   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1107 23:03:52.692336   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:52.732492   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:52.735871   17192 logs.go:123] Gathering logs for kube-apiserver [cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302] ...
	I1107 23:03:52.735897   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302"
	I1107 23:03:52.830628   17192 logs.go:123] Gathering logs for container status ...
	I1107 23:03:52.830684   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 23:03:52.831656   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:52.831802   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:52.923526   17192 logs.go:123] Gathering logs for etcd [b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852] ...
	I1107 23:03:52.923555   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852"
	I1107 23:03:52.969859   17192 logs.go:123] Gathering logs for kube-scheduler [755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838] ...
	I1107 23:03:52.969894   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838"
	I1107 23:03:53.023872   17192 logs.go:123] Gathering logs for kube-controller-manager [fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2] ...
	I1107 23:03:53.023919   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2"
	I1107 23:03:53.124302   17192 logs.go:123] Gathering logs for CRI-O ...
	I1107 23:03:53.124345   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1107 23:03:53.192789   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:53.232946   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:53.327983   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:53.328595   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:53.692554   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:53.731733   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:53.827659   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:53.828124   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:54.192011   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:54.231270   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:54.328231   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:54.328251   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:54.692071   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:54.731052   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:54.827993   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:54.828118   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:55.193066   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:55.231331   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:55.328390   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:55.328438   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:55.692472   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:55.699355   17192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:03:55.712868   17192 api_server.go:72] duration metric: took 1m14.826512681s to wait for apiserver process to appear ...
	I1107 23:03:55.712895   17192 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:03:55.712931   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1107 23:03:55.712980   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 23:03:55.733049   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:55.752689   17192 cri.go:89] found id: "cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302"
	I1107 23:03:55.752708   17192 cri.go:89] found id: ""
	I1107 23:03:55.752716   17192 logs.go:284] 1 containers: [cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302]
	I1107 23:03:55.752760   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:55.756898   17192 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1107 23:03:55.756962   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 23:03:55.820782   17192 cri.go:89] found id: "b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852"
	I1107 23:03:55.820813   17192 cri.go:89] found id: ""
	I1107 23:03:55.820823   17192 logs.go:284] 1 containers: [b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852]
	I1107 23:03:55.820941   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:55.880411   17192 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1107 23:03:55.880479   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 23:03:55.883346   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:55.884108   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:55.918715   17192 cri.go:89] found id: "1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e"
	I1107 23:03:55.918741   17192 cri.go:89] found id: ""
	I1107 23:03:55.918752   17192 logs.go:284] 1 containers: [1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e]
	I1107 23:03:55.918807   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:55.922691   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1107 23:03:55.922756   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 23:03:56.023012   17192 cri.go:89] found id: "755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838"
	I1107 23:03:56.023039   17192 cri.go:89] found id: ""
	I1107 23:03:56.023050   17192 logs.go:284] 1 containers: [755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838]
	I1107 23:03:56.023100   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:56.027093   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1107 23:03:56.027166   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 23:03:56.120342   17192 cri.go:89] found id: "43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01"
	I1107 23:03:56.120368   17192 cri.go:89] found id: ""
	I1107 23:03:56.120376   17192 logs.go:284] 1 containers: [43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01]
	I1107 23:03:56.120427   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:56.124319   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 23:03:56.124395   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 23:03:56.192690   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:56.281229   17192 cri.go:89] found id: "fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2"
	I1107 23:03:56.281255   17192 cri.go:89] found id: ""
	I1107 23:03:56.281265   17192 logs.go:284] 1 containers: [fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2]
	I1107 23:03:56.281340   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:56.284715   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:56.285651   17192 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1107 23:03:56.285710   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1107 23:03:56.382803   17192 cri.go:89] found id: "b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87"
	I1107 23:03:56.382884   17192 cri.go:89] found id: ""
	I1107 23:03:56.382896   17192 logs.go:284] 1 containers: [b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87]
	I1107 23:03:56.382958   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:03:56.384420   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:56.384583   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:56.386657   17192 logs.go:123] Gathering logs for kube-proxy [43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01] ...
	I1107 23:03:56.386686   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01"
	I1107 23:03:56.490796   17192 logs.go:123] Gathering logs for kube-controller-manager [fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2] ...
	I1107 23:03:56.490837   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2"
	I1107 23:03:56.635807   17192 logs.go:123] Gathering logs for kindnet [b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87] ...
	I1107 23:03:56.635854   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87"
	I1107 23:03:56.692830   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:56.783502   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:56.787496   17192 logs.go:123] Gathering logs for dmesg ...
	I1107 23:03:56.787531   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 23:03:56.801180   17192 logs.go:123] Gathering logs for kube-apiserver [cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302] ...
	I1107 23:03:56.801211   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302"
	I1107 23:03:56.884890   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:56.885646   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:56.914054   17192 logs.go:123] Gathering logs for coredns [1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e] ...
	I1107 23:03:56.914101   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e"
	I1107 23:03:57.085513   17192 logs.go:123] Gathering logs for kube-scheduler [755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838] ...
	I1107 23:03:57.085542   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838"
	I1107 23:03:57.192163   17192 logs.go:123] Gathering logs for CRI-O ...
	I1107 23:03:57.192197   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1107 23:03:57.193008   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:57.232024   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:57.277222   17192 logs.go:123] Gathering logs for container status ...
	I1107 23:03:57.277261   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 23:03:57.327819   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:57.329482   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:57.403893   17192 logs.go:123] Gathering logs for kubelet ...
	I1107 23:03:57.403930   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 23:03:57.556049   17192 logs.go:123] Gathering logs for describe nodes ...
	I1107 23:03:57.556086   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1107 23:03:57.693195   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:57.785201   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:57.787843   17192 logs.go:123] Gathering logs for etcd [b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852] ...
	I1107 23:03:57.787873   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852"
	I1107 23:03:57.828597   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:57.828852   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:58.194065   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:58.232528   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:58.328605   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:58.328743   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:58.692842   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:58.731562   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:58.829028   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:58.829144   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:59.192101   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:59.231533   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:59.329766   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:03:59.330267   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:59.692602   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:03:59.732746   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:03:59.828579   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:03:59.829527   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1107 23:04:00.192349   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:00.231594   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:00.327646   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:00.329415   17192 kapi.go:107] duration metric: took 1m13.015444045s to wait for kubernetes.io/minikube-addons=registry ...
	I1107 23:04:00.349927   17192 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1107 23:04:00.355591   17192 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1107 23:04:00.356626   17192 api_server.go:141] control plane version: v1.28.3
	I1107 23:04:00.356649   17192 api_server.go:131] duration metric: took 4.643748168s to wait for apiserver health ...
	I1107 23:04:00.356657   17192 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:04:00.356676   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1107 23:04:00.356718   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1107 23:04:00.390644   17192 cri.go:89] found id: "cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302"
	I1107 23:04:00.390667   17192 cri.go:89] found id: ""
	I1107 23:04:00.390676   17192 logs.go:284] 1 containers: [cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302]
	I1107 23:04:00.390731   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:04:00.394051   17192 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1107 23:04:00.394112   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1107 23:04:00.427834   17192 cri.go:89] found id: "b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852"
	I1107 23:04:00.427861   17192 cri.go:89] found id: ""
	I1107 23:04:00.427869   17192 logs.go:284] 1 containers: [b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852]
	I1107 23:04:00.427919   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:04:00.431035   17192 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1107 23:04:00.431122   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1107 23:04:00.464041   17192 cri.go:89] found id: "1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e"
	I1107 23:04:00.464067   17192 cri.go:89] found id: ""
	I1107 23:04:00.464076   17192 logs.go:284] 1 containers: [1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e]
	I1107 23:04:00.464128   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:04:00.467414   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1107 23:04:00.467467   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1107 23:04:00.501140   17192 cri.go:89] found id: "755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838"
	I1107 23:04:00.501172   17192 cri.go:89] found id: ""
	I1107 23:04:00.501183   17192 logs.go:284] 1 containers: [755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838]
	I1107 23:04:00.501235   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:04:00.505244   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1107 23:04:00.505297   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1107 23:04:00.539071   17192 cri.go:89] found id: "43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01"
	I1107 23:04:00.539094   17192 cri.go:89] found id: ""
	I1107 23:04:00.539103   17192 logs.go:284] 1 containers: [43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01]
	I1107 23:04:00.539157   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:04:00.542605   17192 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1107 23:04:00.542670   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1107 23:04:00.587586   17192 cri.go:89] found id: "fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2"
	I1107 23:04:00.587615   17192 cri.go:89] found id: ""
	I1107 23:04:00.587625   17192 logs.go:284] 1 containers: [fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2]
	I1107 23:04:00.587678   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:04:00.592334   17192 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1107 23:04:00.592400   17192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1107 23:04:00.692757   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:00.708985   17192 cri.go:89] found id: "b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87"
	I1107 23:04:00.709015   17192 cri.go:89] found id: ""
	I1107 23:04:00.709026   17192 logs.go:284] 1 containers: [b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87]
	I1107 23:04:00.709080   17192 ssh_runner.go:195] Run: which crictl
	I1107 23:04:00.785469   17192 logs.go:123] Gathering logs for kube-scheduler [755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838] ...
	I1107 23:04:00.785497   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838"
	I1107 23:04:00.788825   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:00.884035   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:01.093652   17192 logs.go:123] Gathering logs for kube-proxy [43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01] ...
	I1107 23:04:01.093743   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01"
	I1107 23:04:01.193483   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:01.288586   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:01.386363   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:01.393472   17192 logs.go:123] Gathering logs for kube-controller-manager [fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2] ...
	I1107 23:04:01.393503   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2"
	I1107 23:04:01.693322   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:01.712397   17192 logs.go:123] Gathering logs for kindnet [b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87] ...
	I1107 23:04:01.712432   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87"
	I1107 23:04:01.787044   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:01.885110   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:01.990902   17192 logs.go:123] Gathering logs for CRI-O ...
	I1107 23:04:01.990946   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1107 23:04:02.150578   17192 logs.go:123] Gathering logs for container status ...
	I1107 23:04:02.150616   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1107 23:04:02.193247   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:02.288749   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:02.385223   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:02.399449   17192 logs.go:123] Gathering logs for kubelet ...
	I1107 23:04:02.399542   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 23:04:02.554953   17192 logs.go:123] Gathering logs for dmesg ...
	I1107 23:04:02.554995   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 23:04:02.685215   17192 logs.go:123] Gathering logs for describe nodes ...
	I1107 23:04:02.685300   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1107 23:04:02.693891   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:02.785025   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:02.884056   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:03.015661   17192 logs.go:123] Gathering logs for kube-apiserver [cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302] ...
	I1107 23:04:03.015704   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302"
	I1107 23:04:03.114535   17192 logs.go:123] Gathering logs for etcd [b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852] ...
	I1107 23:04:03.114579   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852"
	I1107 23:04:03.194104   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:03.229035   17192 logs.go:123] Gathering logs for coredns [1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e] ...
	I1107 23:04:03.229071   17192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e"
	I1107 23:04:03.284983   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:03.328803   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:03.692468   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:03.732625   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:03.829064   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:04.192124   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:04.231843   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:04.327795   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:04.692372   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:04.731521   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:04.829075   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:05.192542   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:05.232744   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:05.328559   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:05.693225   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:05.731692   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:05.827911   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:05.829285   17192 system_pods.go:59] 19 kube-system pods found
	I1107 23:04:05.829325   17192 system_pods.go:61] "coredns-5dd5756b68-twnv4" [6b0997c6-4ac7-4f9c-b269-8442dfa6ccfc] Running
	I1107 23:04:05.829339   17192 system_pods.go:61] "csi-hostpath-attacher-0" [918f9f8a-0225-48d2-b3eb-86854ba8abaa] Running
	I1107 23:04:05.829350   17192 system_pods.go:61] "csi-hostpath-resizer-0" [a8222792-bfa5-4b24-9757-8ca023f55cdc] Running
	I1107 23:04:05.829366   17192 system_pods.go:61] "csi-hostpathplugin-tqdhq" [293def48-4ab8-4734-9ed9-63d6691c0413] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:04:05.829377   17192 system_pods.go:61] "etcd-addons-890770" [78e7cce4-2e15-43e0-bd3e-741f473b57ab] Running
	I1107 23:04:05.829389   17192 system_pods.go:61] "kindnet-ngghx" [c9838805-0061-4bda-8193-524d8faba7fe] Running
	I1107 23:04:05.829400   17192 system_pods.go:61] "kube-apiserver-addons-890770" [f7205d1f-43b7-4572-8fcd-e4d1650d7ae0] Running
	I1107 23:04:05.829410   17192 system_pods.go:61] "kube-controller-manager-addons-890770" [f71e1435-ddd5-4db7-9832-ea17c8a1df88] Running
	I1107 23:04:05.829421   17192 system_pods.go:61] "kube-ingress-dns-minikube" [5ba69908-3e0e-4a80-93f6-33319cf6052e] Running
	I1107 23:04:05.829428   17192 system_pods.go:61] "kube-proxy-rrq5l" [996a1db3-7f7c-4da8-977f-546a4d4687c0] Running
	I1107 23:04:05.829439   17192 system_pods.go:61] "kube-scheduler-addons-890770" [fc1460b9-906e-421e-b60b-2a709ce585fe] Running
	I1107 23:04:05.829450   17192 system_pods.go:61] "metrics-server-7c66d45ddc-7ggdv" [2b50a7aa-9578-4b46-a1fc-223b5c78a661] Running
	I1107 23:04:05.829460   17192 system_pods.go:61] "nvidia-device-plugin-daemonset-zfkgl" [2205ac04-9181-4a44-a293-1022552e9e82] Running
	I1107 23:04:05.829471   17192 system_pods.go:61] "registry-9wd9z" [4d059836-b855-4bb6-b803-c0168e7c81ac] Running
	I1107 23:04:05.829481   17192 system_pods.go:61] "registry-proxy-jgztn" [5a51acc2-8ab5-4ee4-bf7b-ba4efd67d0bf] Running
	I1107 23:04:05.829491   17192 system_pods.go:61] "snapshot-controller-58dbcc7b99-6m46s" [90ce4aef-0cac-4294-8b49-43239e0d0f21] Running
	I1107 23:04:05.829501   17192 system_pods.go:61] "snapshot-controller-58dbcc7b99-d58wg" [cfc14b1a-aa78-4b58-8267-ac639a381d1f] Running
	I1107 23:04:05.829512   17192 system_pods.go:61] "storage-provisioner" [ee234d52-fa1a-41a0-a1ba-f4ad0c013f40] Running
	I1107 23:04:05.829522   17192 system_pods.go:61] "tiller-deploy-7b677967b9-gwrk4" [0df13ec3-b6cb-4894-b86a-d25f8f3bc106] Running
	I1107 23:04:05.829535   17192 system_pods.go:74] duration metric: took 5.472870214s to wait for pod list to return data ...
	I1107 23:04:05.829548   17192 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:04:05.831414   17192 default_sa.go:45] found service account: "default"
	I1107 23:04:05.831432   17192 default_sa.go:55] duration metric: took 1.873584ms for default service account to be created ...
	I1107 23:04:05.831441   17192 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:04:05.840873   17192 system_pods.go:86] 19 kube-system pods found
	I1107 23:04:05.840906   17192 system_pods.go:89] "coredns-5dd5756b68-twnv4" [6b0997c6-4ac7-4f9c-b269-8442dfa6ccfc] Running
	I1107 23:04:05.840914   17192 system_pods.go:89] "csi-hostpath-attacher-0" [918f9f8a-0225-48d2-b3eb-86854ba8abaa] Running
	I1107 23:04:05.840921   17192 system_pods.go:89] "csi-hostpath-resizer-0" [a8222792-bfa5-4b24-9757-8ca023f55cdc] Running
	I1107 23:04:05.840936   17192 system_pods.go:89] "csi-hostpathplugin-tqdhq" [293def48-4ab8-4734-9ed9-63d6691c0413] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1107 23:04:05.840943   17192 system_pods.go:89] "etcd-addons-890770" [78e7cce4-2e15-43e0-bd3e-741f473b57ab] Running
	I1107 23:04:05.840956   17192 system_pods.go:89] "kindnet-ngghx" [c9838805-0061-4bda-8193-524d8faba7fe] Running
	I1107 23:04:05.840963   17192 system_pods.go:89] "kube-apiserver-addons-890770" [f7205d1f-43b7-4572-8fcd-e4d1650d7ae0] Running
	I1107 23:04:05.840974   17192 system_pods.go:89] "kube-controller-manager-addons-890770" [f71e1435-ddd5-4db7-9832-ea17c8a1df88] Running
	I1107 23:04:05.840982   17192 system_pods.go:89] "kube-ingress-dns-minikube" [5ba69908-3e0e-4a80-93f6-33319cf6052e] Running
	I1107 23:04:05.840992   17192 system_pods.go:89] "kube-proxy-rrq5l" [996a1db3-7f7c-4da8-977f-546a4d4687c0] Running
	I1107 23:04:05.841001   17192 system_pods.go:89] "kube-scheduler-addons-890770" [fc1460b9-906e-421e-b60b-2a709ce585fe] Running
	I1107 23:04:05.841011   17192 system_pods.go:89] "metrics-server-7c66d45ddc-7ggdv" [2b50a7aa-9578-4b46-a1fc-223b5c78a661] Running
	I1107 23:04:05.841020   17192 system_pods.go:89] "nvidia-device-plugin-daemonset-zfkgl" [2205ac04-9181-4a44-a293-1022552e9e82] Running
	I1107 23:04:05.841028   17192 system_pods.go:89] "registry-9wd9z" [4d059836-b855-4bb6-b803-c0168e7c81ac] Running
	I1107 23:04:05.841035   17192 system_pods.go:89] "registry-proxy-jgztn" [5a51acc2-8ab5-4ee4-bf7b-ba4efd67d0bf] Running
	I1107 23:04:05.841042   17192 system_pods.go:89] "snapshot-controller-58dbcc7b99-6m46s" [90ce4aef-0cac-4294-8b49-43239e0d0f21] Running
	I1107 23:04:05.841052   17192 system_pods.go:89] "snapshot-controller-58dbcc7b99-d58wg" [cfc14b1a-aa78-4b58-8267-ac639a381d1f] Running
	I1107 23:04:05.841058   17192 system_pods.go:89] "storage-provisioner" [ee234d52-fa1a-41a0-a1ba-f4ad0c013f40] Running
	I1107 23:04:05.841065   17192 system_pods.go:89] "tiller-deploy-7b677967b9-gwrk4" [0df13ec3-b6cb-4894-b86a-d25f8f3bc106] Running
	I1107 23:04:05.841076   17192 system_pods.go:126] duration metric: took 9.628777ms to wait for k8s-apps to be running ...
	I1107 23:04:05.841091   17192 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:04:05.841140   17192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:04:05.852777   17192 system_svc.go:56] duration metric: took 11.68082ms WaitForService to wait for kubelet.
	I1107 23:04:05.852801   17192 kubeadm.go:581] duration metric: took 1m24.966452483s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:04:05.852832   17192 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:04:05.855653   17192 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 23:04:05.855675   17192 node_conditions.go:123] node cpu capacity is 8
	I1107 23:04:05.855685   17192 node_conditions.go:105] duration metric: took 2.848695ms to run NodePressure ...
	I1107 23:04:05.855695   17192 start.go:228] waiting for startup goroutines ...
	I1107 23:04:06.193402   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:06.231962   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:06.327441   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:06.692886   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:06.731000   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:06.827659   17192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1107 23:04:07.192317   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:07.231579   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:07.327076   17192 kapi.go:107] duration metric: took 1m20.017864974s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1107 23:04:07.692687   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:07.732123   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:08.193242   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:08.231408   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:08.693532   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:08.732294   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:09.192978   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:09.232271   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:09.692694   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1107 23:04:09.732214   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:10.192766   17192 kapi.go:107] duration metric: took 1m21.061521836s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1107 23:04:10.195089   17192 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-890770 cluster.
	I1107 23:04:10.196914   17192 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1107 23:04:10.198482   17192 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1107 23:04:10.233679   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:10.730961   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:11.231677   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:11.731599   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:12.231903   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:12.732162   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:13.231019   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:13.731155   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:14.232571   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:14.731317   17192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1107 23:04:15.231345   17192 kapi.go:107] duration metric: took 1m27.011725919s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1107 23:04:15.233510   17192 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, helm-tiller, metrics-server, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1107 23:04:15.236243   17192 addons.go:502] enable addons completed in 1m34.423883499s: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns helm-tiller metrics-server inspektor-gadget storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1107 23:04:15.236289   17192 start.go:233] waiting for cluster config update ...
	I1107 23:04:15.236316   17192 start.go:242] writing updated cluster config ...
	I1107 23:04:15.236570   17192 ssh_runner.go:195] Run: rm -f paused
	I1107 23:04:15.283297   17192 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1107 23:04:15.285376   17192 out.go:177] * Done! kubectl is now configured to use "addons-890770" cluster and "default" namespace by default
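The diagnostics interleaved above were gathered with a handful of commands that also work by hand. A minimal sketch, assuming shell access to the node (e.g. `minikube ssh -p addons-890770`); the `crictl_logs_cmd` helper is hypothetical and only builds the command string so the privileged calls stay explicit and copy-pasteable:

```shell
#!/bin/sh
# Sketch: the same log-gathering commands the harness ran above.
# Helper (hypothetical): build the crictl logs command for a container.
crictl_logs_cmd() {
  # $1 = tail length, $2 = container ID (take IDs from `sudo crictl ps -a`)
  printf 'sudo /usr/bin/crictl logs --tail %s %s\n' "$1" "$2"
}

# kube-apiserver container ID taken from the log above
crictl_logs_cmd 400 cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302

# service logs were gathered the same way in the run above:
echo 'sudo journalctl -u crio -n 400'
echo 'sudo journalctl -u kubelet -n 400'
```

Run the printed commands inside the node; the container IDs in this report will differ on a fresh cluster.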
	
	* 
	* ==> CRI-O <==
	* Nov 07 23:07:13 addons-890770 crio[949]: time="2023-11-07 23:07:13.487296443Z" level=info msg="Removing container: ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f" id=670c1d9b-8e14-4aae-a63b-6ba5986e2604 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 07 23:07:13 addons-890770 crio[949]: time="2023-11-07 23:07:13.505876581Z" level=info msg="Removed container ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=670c1d9b-8e14-4aae-a63b-6ba5986e2604 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 07 23:07:14 addons-890770 crio[949]: time="2023-11-07 23:07:14.531909961Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7" id=713ae930-9cf9-46d0-94e3-ba1ed38ad243 name=/runtime.v1.ImageService/PullImage
	Nov 07 23:07:14 addons-890770 crio[949]: time="2023-11-07 23:07:14.532671661Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=e12f4abc-e46f-4b05-80d4-50291a2b0687 name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:07:14 addons-890770 crio[949]: time="2023-11-07 23:07:14.533614802Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e12f4abc-e46f-4b05-80d4-50291a2b0687 name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:07:14 addons-890770 crio[949]: time="2023-11-07 23:07:14.534409953Z" level=info msg="Creating container: default/hello-world-app-5d77478584-4ndcs/hello-world-app" id=4de0f313-2895-4bfa-b70d-0eb5b4ebc66c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 07 23:07:14 addons-890770 crio[949]: time="2023-11-07 23:07:14.534484530Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 07 23:07:14 addons-890770 crio[949]: time="2023-11-07 23:07:14.606415441Z" level=info msg="Created container 356ff37ead1ed19ad4145d38fd65b77e5c2cf38e6065d4a903c5b9e4f67bd36c: default/hello-world-app-5d77478584-4ndcs/hello-world-app" id=4de0f313-2895-4bfa-b70d-0eb5b4ebc66c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 07 23:07:14 addons-890770 crio[949]: time="2023-11-07 23:07:14.606971566Z" level=info msg="Starting container: 356ff37ead1ed19ad4145d38fd65b77e5c2cf38e6065d4a903c5b9e4f67bd36c" id=57cf6d1c-1fe4-4d05-94f2-c8ffefff5621 name=/runtime.v1.RuntimeService/StartContainer
	Nov 07 23:07:14 addons-890770 crio[949]: time="2023-11-07 23:07:14.615423957Z" level=info msg="Started container" PID=11331 containerID=356ff37ead1ed19ad4145d38fd65b77e5c2cf38e6065d4a903c5b9e4f67bd36c description=default/hello-world-app-5d77478584-4ndcs/hello-world-app id=57cf6d1c-1fe4-4d05-94f2-c8ffefff5621 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dcf65ba393830309502c77bd6eed3b26157a5de963c6134e77c2bf0737037f4c
	Nov 07 23:07:15 addons-890770 crio[949]: time="2023-11-07 23:07:15.103393637Z" level=info msg="Stopping container: bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b (timeout: 2s)" id=0972a867-570f-4703-ba02-3701bb3614a4 name=/runtime.v1.RuntimeService/StopContainer
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.111716508Z" level=warning msg="Stopping container bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=0972a867-570f-4703-ba02-3701bb3614a4 name=/runtime.v1.RuntimeService/StopContainer
	Nov 07 23:07:17 addons-890770 conmon[6661]: conmon bfc260ff984e5dee754a <ninfo>: container 6673 exited with status 137
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.257387677Z" level=info msg="Stopped container bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b: ingress-nginx/ingress-nginx-controller-7c6974c4d8-ckgx2/controller" id=0972a867-570f-4703-ba02-3701bb3614a4 name=/runtime.v1.RuntimeService/StopContainer
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.257915108Z" level=info msg="Stopping pod sandbox: 5769ef8782e9689c208f302414599814485c3b087006791a7b6076d54e937470" id=e55d4410-5493-4885-a231-35f4cedfe44d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.260836126Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-N3QOCZ2CQ3QGOZXO - [0:0]\n:KUBE-HP-ENNKT7QW4UYYD5JZ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-N3QOCZ2CQ3QGOZXO\n-X KUBE-HP-ENNKT7QW4UYYD5JZ\nCOMMIT\n"
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.262247466Z" level=info msg="Closing host port tcp:80"
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.262287206Z" level=info msg="Closing host port tcp:443"
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.263618799Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.263640518Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.263802117Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-ckgx2 Namespace:ingress-nginx ID:5769ef8782e9689c208f302414599814485c3b087006791a7b6076d54e937470 UID:d6ec01f6-357b-4e77-b472-0b3d7320b3ce NetNS:/var/run/netns/d792aa43-8e41-41fd-977c-a2f8465c6583 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.263925043Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-ckgx2 from CNI network \"kindnet\" (type=ptp)"
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.301219458Z" level=info msg="Stopped pod sandbox: 5769ef8782e9689c208f302414599814485c3b087006791a7b6076d54e937470" id=e55d4410-5493-4885-a231-35f4cedfe44d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.497681699Z" level=info msg="Removing container: bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b" id=c696269f-9444-41af-9e5a-e8a88b8220ea name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 07 23:07:17 addons-890770 crio[949]: time="2023-11-07 23:07:17.512141135Z" level=info msg="Removed container bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b: ingress-nginx/ingress-nginx-controller-7c6974c4d8-ckgx2/controller" id=c696269f-9444-41af-9e5a-e8a88b8220ea name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	356ff37ead1ed       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   dcf65ba393830       hello-world-app-5d77478584-4ndcs
	d088a14176501       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   74c7b97aab372       nginx
	476cddfa1165f       ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4                        2 minutes ago       Running             headlamp                  0                   fceec812c7dc6       headlamp-94b766c-h789z
	62ecd8c1c4b0b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   042b56b78e3a8       gcp-auth-d4c87556c-htp4x
	96d98da1722f0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   8d35db82464a6       ingress-nginx-admission-patch-qplp4
	5c39ebb3baa48       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   b61171212efd9       ingress-nginx-admission-create-p6dp6
	1e2f21e601692       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   d956d608649e6       coredns-5dd5756b68-twnv4
	89fa989e2db7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   052539f976a06       storage-provisioner
	43f45ede8d7a8       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                             4 minutes ago       Running             kube-proxy                0                   f8dc246bf6d71       kube-proxy-rrq5l
	b49c462a77535       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   138598408e642       kindnet-ngghx
	cb742bdeef0b3       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                             4 minutes ago       Running             kube-apiserver            0                   26aea54bce760       kube-apiserver-addons-890770
	fb91fbbb57411       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                             4 minutes ago       Running             kube-controller-manager   0                   a126b9e8a37fe       kube-controller-manager-addons-890770
	b6a07896d338b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   caca4696ee096       etcd-addons-890770
	755540014df11       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                             4 minutes ago       Running             kube-scheduler            0                   baf8f6386e80f       kube-scheduler-addons-890770
	
	* 
	* ==> coredns [1e2f21e601692fd2e645131e2921713ccc5739d91b4a925896228f7c39bb1a4e] <==
	* [INFO] 10.244.0.16:47751 - 34992 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113456s
	[INFO] 10.244.0.16:36738 - 8726 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.006515348s
	[INFO] 10.244.0.16:36738 - 3869 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.009247901s
	[INFO] 10.244.0.16:58834 - 50578 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004805719s
	[INFO] 10.244.0.16:58834 - 65174 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.009751179s
	[INFO] 10.244.0.16:55207 - 21440 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00450157s
	[INFO] 10.244.0.16:55207 - 39887 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00457171s
	[INFO] 10.244.0.16:46867 - 14654 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011004s
	[INFO] 10.244.0.16:46867 - 47164 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134364s
	[INFO] 10.244.0.20:44642 - 31188 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000236686s
	[INFO] 10.244.0.20:34522 - 28625 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000308044s
	[INFO] 10.244.0.20:48425 - 23056 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000094415s
	[INFO] 10.244.0.20:34466 - 12116 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131752s
	[INFO] 10.244.0.20:52664 - 39076 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099549s
	[INFO] 10.244.0.20:42077 - 29942 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000081208s
	[INFO] 10.244.0.20:45590 - 6751 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.006599039s
	[INFO] 10.244.0.20:45994 - 57072 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007993893s
	[INFO] 10.244.0.20:42316 - 46402 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005907675s
	[INFO] 10.244.0.20:56332 - 8058 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005943321s
	[INFO] 10.244.0.20:59915 - 11515 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005160128s
	[INFO] 10.244.0.20:51083 - 6066 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005264734s
	[INFO] 10.244.0.20:55000 - 26177 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 78 0.000662151s
	[INFO] 10.244.0.20:36383 - 38503 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.000752552s
	[INFO] 10.244.0.23:41906 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119082s
	[INFO] 10.244.0.23:39160 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082151s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-890770
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-890770
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=addons-890770
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_02_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-890770
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-890770
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:07:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:05:01 +0000   Tue, 07 Nov 2023 23:02:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:05:01 +0000   Tue, 07 Nov 2023 23:02:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:05:01 +0000   Tue, 07 Nov 2023 23:02:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:05:01 +0000   Tue, 07 Nov 2023 23:03:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-890770
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 949401fc37a94c1e8e6c27984fd301bf
	  System UUID:                7dd79b5d-563c-4248-a400-f42b1ff54ce2
	  Boot ID:                    c97cc438-dd92-4788-91bf-3e8db350d4d3
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-4ndcs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-d4c87556c-htp4x                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  headlamp                    headlamp-94b766c-h789z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 coredns-5dd5756b68-twnv4                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m41s
	  kube-system                 etcd-addons-890770                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m54s
	  kube-system                 kindnet-ngghx                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m41s
	  kube-system                 kube-apiserver-addons-890770             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-addons-890770    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-rrq5l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-scheduler-addons-890770             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 4m36s            kube-proxy       
	  Normal  Starting                 5m               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m (x8 over 5m)  kubelet          Node addons-890770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m (x8 over 5m)  kubelet          Node addons-890770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m (x8 over 5m)  kubelet          Node addons-890770 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m54s            kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s            kubelet          Node addons-890770 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s            kubelet          Node addons-890770 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s            kubelet          Node addons-890770 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m42s            node-controller  Node addons-890770 event: Registered Node addons-890770 in Controller
	  Normal  NodeReady                4m7s             kubelet          Node addons-890770 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.008434] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003623] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000803] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001066] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000720] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000714] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000867] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001353] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001437] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.993108] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 7 23:05] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe 74 f7 7e f9 5c ce 6f 32 1b 79 79 08 00
	[  +1.019627] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fe 74 f7 7e f9 5c ce 6f 32 1b 79 79 08 00
	[  +2.011847] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe 74 f7 7e f9 5c ce 6f 32 1b 79 79 08 00
	[  +4.131677] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe 74 f7 7e f9 5c ce 6f 32 1b 79 79 08 00
	[  +8.187432] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe 74 f7 7e f9 5c ce 6f 32 1b 79 79 08 00
	[ +16.126871] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fe 74 f7 7e f9 5c ce 6f 32 1b 79 79 08 00
	[Nov 7 23:06] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fe 74 f7 7e f9 5c ce 6f 32 1b 79 79 08 00
	
	* 
	* ==> etcd [b6a07896d338b626905900faeca9dd8389c41c66b5e41dfb6d5ad6efa07e1852] <==
	* {"level":"info","ts":"2023-11-07T23:02:42.593444Z","caller":"traceutil/trace.go:171","msg":"trace[1607523217] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"297.087413ms","start":"2023-11-07T23:02:42.296348Z","end":"2023-11-07T23:02:42.593435Z","steps":["trace[1607523217] 'process raft request'  (duration: 286.370437ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:42.593603Z","caller":"traceutil/trace.go:171","msg":"trace[40401591] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"296.311263ms","start":"2023-11-07T23:02:42.297283Z","end":"2023-11-07T23:02:42.593594Z","steps":["trace[40401591] 'process raft request'  (duration: 295.17612ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:42.987725Z","caller":"traceutil/trace.go:171","msg":"trace[307353339] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"103.154785ms","start":"2023-11-07T23:02:42.884552Z","end":"2023-11-07T23:02:42.987706Z","steps":["trace[307353339] 'process raft request'  (duration: 103.033593ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:43.19053Z","caller":"traceutil/trace.go:171","msg":"trace[968725945] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"102.802995ms","start":"2023-11-07T23:02:43.08771Z","end":"2023-11-07T23:02:43.190513Z","steps":["trace[968725945] 'process raft request'  (duration: 102.69349ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:43.68581Z","caller":"traceutil/trace.go:171","msg":"trace[1201239293] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"189.689815ms","start":"2023-11-07T23:02:43.496076Z","end":"2023-11-07T23:02:43.685766Z","steps":["trace[1201239293] 'process raft request'  (duration: 189.480112ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:43.781553Z","caller":"traceutil/trace.go:171","msg":"trace[110987452] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"285.134302ms","start":"2023-11-07T23:02:43.496399Z","end":"2023-11-07T23:02:43.781533Z","steps":["trace[110987452] 'process raft request'  (duration: 284.631831ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:43.781958Z","caller":"traceutil/trace.go:171","msg":"trace[181479889] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:418; }","duration":"196.132125ms","start":"2023-11-07T23:02:43.585812Z","end":"2023-11-07T23:02:43.781945Z","steps":["trace[181479889] 'read index received'  (duration: 99.895607ms)","trace[181479889] 'applied index is now lower than readState.Index'  (duration: 96.235624ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-07T23:02:43.782131Z","caller":"traceutil/trace.go:171","msg":"trace[608001108] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"188.44843ms","start":"2023-11-07T23:02:43.593672Z","end":"2023-11-07T23:02:43.78212Z","steps":["trace[608001108] 'process raft request'  (duration: 187.797402ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-07T23:02:43.782429Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.625685ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-07T23:02:43.782528Z","caller":"traceutil/trace.go:171","msg":"trace[1843891495] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:410; }","duration":"196.735546ms","start":"2023-11-07T23:02:43.585782Z","end":"2023-11-07T23:02:43.782518Z","steps":["trace[1843891495] 'agreement among raft nodes before linearized reading'  (duration: 196.563134ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:44.488877Z","caller":"traceutil/trace.go:171","msg":"trace[927271767] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"108.579606ms","start":"2023-11-07T23:02:44.380267Z","end":"2023-11-07T23:02:44.488846Z","steps":["trace[927271767] 'process raft request'  (duration: 13.652458ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:44.490186Z","caller":"traceutil/trace.go:171","msg":"trace[1250787021] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"109.941569ms","start":"2023-11-07T23:02:44.380229Z","end":"2023-11-07T23:02:44.49017Z","steps":["trace[1250787021] 'process raft request'  (duration: 13.362183ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:44.698385Z","caller":"traceutil/trace.go:171","msg":"trace[1689768071] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"104.251777ms","start":"2023-11-07T23:02:44.594116Z","end":"2023-11-07T23:02:44.698368Z","steps":["trace[1689768071] 'process raft request'  (duration: 88.971911ms)","trace[1689768071] 'compare'  (duration: 14.597446ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-07T23:02:44.698526Z","caller":"traceutil/trace.go:171","msg":"trace[32821259] transaction","detail":"{read_only:false; response_revision:423; number_of_response:1; }","duration":"104.031579ms","start":"2023-11-07T23:02:44.594487Z","end":"2023-11-07T23:02:44.698518Z","steps":["trace[32821259] 'process raft request'  (duration: 103.544871ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:44.69861Z","caller":"traceutil/trace.go:171","msg":"trace[1643214965] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"103.415055ms","start":"2023-11-07T23:02:44.595189Z","end":"2023-11-07T23:02:44.698604Z","steps":["trace[1643214965] 'process raft request'  (duration: 102.884226ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:02:44.698686Z","caller":"traceutil/trace.go:171","msg":"trace[1009643677] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:431; }","duration":"104.371348ms","start":"2023-11-07T23:02:44.594306Z","end":"2023-11-07T23:02:44.698678Z","steps":["trace[1009643677] 'read index received'  (duration: 85.708345ms)","trace[1009643677] 'applied index is now lower than readState.Index'  (duration: 18.66147ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-07T23:02:44.698856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.551552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-07T23:02:44.698882Z","caller":"traceutil/trace.go:171","msg":"trace[238227994] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:424; }","duration":"104.589458ms","start":"2023-11-07T23:02:44.594283Z","end":"2023-11-07T23:02:44.698872Z","steps":["trace[238227994] 'agreement among raft nodes before linearized reading'  (duration: 104.529057ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:03:57.097389Z","caller":"traceutil/trace.go:171","msg":"trace[1084254734] linearizableReadLoop","detail":"{readStateIndex:1129; appliedIndex:1128; }","duration":"100.154582ms","start":"2023-11-07T23:03:56.997215Z","end":"2023-11-07T23:03:57.09737Z","steps":["trace[1084254734] 'read index received'  (duration: 100.068846ms)","trace[1084254734] 'applied index is now lower than readState.Index'  (duration: 84.696µs)"],"step_count":2}
	{"level":"warn","ts":"2023-11-07T23:03:57.09747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.260201ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:1 size:2444"}
	{"level":"info","ts":"2023-11-07T23:03:57.097498Z","caller":"traceutil/trace.go:171","msg":"trace[1398602558] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1093; }","duration":"100.303344ms","start":"2023-11-07T23:03:56.997187Z","end":"2023-11-07T23:03:57.09749Z","steps":["trace[1398602558] 'agreement among raft nodes before linearized reading'  (duration: 100.224013ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-07T23:03:57.123969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.405633ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-07T23:03:57.124026Z","caller":"traceutil/trace.go:171","msg":"trace[95691115] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:1094; }","duration":"118.47646ms","start":"2023-11-07T23:03:57.005536Z","end":"2023-11-07T23:03:57.124013Z","steps":["trace[95691115] 'agreement among raft nodes before linearized reading'  (duration: 118.334312ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:03:57.12425Z","caller":"traceutil/trace.go:171","msg":"trace[1784011346] transaction","detail":"{read_only:false; response_revision:1094; number_of_response:1; }","duration":"126.948221ms","start":"2023-11-07T23:03:56.997294Z","end":"2023-11-07T23:03:57.124242Z","steps":["trace[1784011346] 'process raft request'  (duration: 126.408225ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-07T23:05:17.351322Z","caller":"traceutil/trace.go:171","msg":"trace[2063201591] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1672; }","duration":"114.349308ms","start":"2023-11-07T23:05:17.236956Z","end":"2023-11-07T23:05:17.351305Z","steps":["trace[2063201591] 'process raft request'  (duration: 69.95487ms)","trace[2063201591] 'compare'  (duration: 44.315767ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [62ecd8c1c4b0bab4e4d6de005a4e4bc528d24efdeae86f2d5db482e5696a30a9] <==
	* 2023/11/07 23:04:09 GCP Auth Webhook started!
	2023/11/07 23:04:16 Ready to marshal response ...
	2023/11/07 23:04:16 Ready to write response ...
	2023/11/07 23:04:16 Ready to marshal response ...
	2023/11/07 23:04:16 Ready to write response ...
	2023/11/07 23:04:25 Ready to marshal response ...
	2023/11/07 23:04:25 Ready to write response ...
	2023/11/07 23:04:27 Ready to marshal response ...
	2023/11/07 23:04:27 Ready to write response ...
	2023/11/07 23:04:31 Ready to marshal response ...
	2023/11/07 23:04:31 Ready to write response ...
	2023/11/07 23:04:32 Ready to marshal response ...
	2023/11/07 23:04:32 Ready to write response ...
	2023/11/07 23:04:32 Ready to marshal response ...
	2023/11/07 23:04:32 Ready to write response ...
	2023/11/07 23:04:32 Ready to marshal response ...
	2023/11/07 23:04:32 Ready to write response ...
	2023/11/07 23:04:40 Ready to marshal response ...
	2023/11/07 23:04:40 Ready to write response ...
	2023/11/07 23:04:49 Ready to marshal response ...
	2023/11/07 23:04:49 Ready to write response ...
	2023/11/07 23:05:06 Ready to marshal response ...
	2023/11/07 23:05:06 Ready to write response ...
	2023/11/07 23:07:12 Ready to marshal response ...
	2023/11/07 23:07:12 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:07:22 up 49 min,  0 users,  load average: 0.27, 0.55, 0.30
	Linux addons-890770 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [b49c462a7753561d3555a22df6d802a9e428da976fd1da46c84c3a78e681de87] <==
	* I1107 23:05:14.951197       1 main.go:227] handling current node
	I1107 23:05:24.955469       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:05:24.955501       1 main.go:227] handling current node
	I1107 23:05:34.967574       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:05:34.967595       1 main.go:227] handling current node
	I1107 23:05:44.974833       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:05:44.974854       1 main.go:227] handling current node
	I1107 23:05:54.980417       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:05:54.980440       1 main.go:227] handling current node
	I1107 23:06:04.984384       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:06:04.984411       1 main.go:227] handling current node
	I1107 23:06:14.990403       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:06:14.990429       1 main.go:227] handling current node
	I1107 23:06:24.994489       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:06:24.994512       1 main.go:227] handling current node
	I1107 23:06:35.002786       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:06:35.002814       1 main.go:227] handling current node
	I1107 23:06:45.006646       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:06:45.006670       1 main.go:227] handling current node
	I1107 23:06:55.010152       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:06:55.010176       1 main.go:227] handling current node
	I1107 23:07:05.013221       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:07:05.013242       1 main.go:227] handling current node
	I1107 23:07:15.026149       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:07:15.026182       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [cb742bdeef0b3facc6b5e16cb76b88e2505ed3642ee184306860f26a123ea302] <==
	* W1107 23:04:44.885299       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1107 23:04:49.699724       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1107 23:04:49.936319       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.89.128"}
	I1107 23:04:53.600284       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1107 23:04:57.998017       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1107 23:05:23.923880       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:05:23.924050       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:05:23.930763       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:05:23.930823       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:05:23.937709       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:05:23.937765       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:05:23.939734       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:05:23.939810       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:05:23.948950       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:05:23.949097       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:05:23.950572       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:05:23.950617       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:05:23.962525       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:05:23.962583       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1107 23:05:23.986014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1107 23:05:23.986157       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1107 23:05:24.940649       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1107 23:05:24.985991       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1107 23:05:24.990915       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1107 23:07:12.160886       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.208.255"}
	
	* 
	* ==> kube-controller-manager [fb91fbbb5741106a4e332046153acbb21b77d6a4e5242ff7aec9df7b251f97b2] <==
	* W1107 23:05:59.979473       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:05:59.979501       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:06:04.534418       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:06:04.534456       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:06:33.568582       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:06:33.568614       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:06:40.877268       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:06:40.877303       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:06:45.931128       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:06:45.931156       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:06:51.819620       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:06:51.819649       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1107 23:07:06.482550       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1107 23:07:06.482578       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1107 23:07:12.000537       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1107 23:07:12.013050       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-4ndcs"
	I1107 23:07:12.020363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.028469ms"
	I1107 23:07:12.026661       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.156058ms"
	I1107 23:07:12.026795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.099µs"
	I1107 23:07:12.036864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="81.701µs"
	I1107 23:07:14.083933       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1107 23:07:14.088678       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1107 23:07:14.091038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.961µs"
	I1107 23:07:15.505704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.845432ms"
	I1107 23:07:15.505796       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="52.894µs"
	
	* 
	* ==> kube-proxy [43f45ede8d7a8188a6f2026a090be8a8030449ff001e3db29e7660eec383ad01] <==
	* I1107 23:02:45.187484       1 server_others.go:69] "Using iptables proxy"
	I1107 23:02:45.286797       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1107 23:02:45.899015       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1107 23:02:45.903235       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:02:45.903338       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1107 23:02:45.903378       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1107 23:02:45.903425       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:02:45.903663       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:02:45.983754       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:02:45.984842       1 config.go:188] "Starting service config controller"
	I1107 23:02:45.986227       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:02:45.985503       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:02:45.986336       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:02:45.985742       1 config.go:315] "Starting node config controller"
	I1107 23:02:45.986383       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:02:46.087678       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:02:46.087704       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1107 23:02:46.088309       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [755540014df110d07bd1ff341b45322bc18f37b3437a30f4f20f54638d976838] <==
	* W1107 23:02:25.502881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:02:25.502890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1107 23:02:25.502957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:02:25.502968       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 23:02:25.502969       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:02:25.502984       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1107 23:02:25.502987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1107 23:02:25.502999       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:02:25.503004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:02:25.503019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1107 23:02:25.503051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:02:25.503071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1107 23:02:25.503075       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1107 23:02:25.503083       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 23:02:25.503085       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:02:25.503087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:02:25.503095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:02:25.503097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1107 23:02:26.410073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:02:26.410104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1107 23:02:26.478530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:02:26.478569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:02:26.584439       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:02:26.584477       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1107 23:02:26.999620       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 07 23:07:12 addons-890770 kubelet[1556]: I1107 23:07:12.021056    1556 memory_manager.go:346] "RemoveStaleState removing state" podUID="90ce4aef-0cac-4294-8b49-43239e0d0f21" containerName="volume-snapshot-controller"
	Nov 07 23:07:12 addons-890770 kubelet[1556]: I1107 23:07:12.132472    1556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/13c9cb77-bd9c-4a22-9de6-37062a14e6f2-gcp-creds\") pod \"hello-world-app-5d77478584-4ndcs\" (UID: \"13c9cb77-bd9c-4a22-9de6-37062a14e6f2\") " pod="default/hello-world-app-5d77478584-4ndcs"
	Nov 07 23:07:12 addons-890770 kubelet[1556]: I1107 23:07:12.132539    1556 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7nr\" (UniqueName: \"kubernetes.io/projected/13c9cb77-bd9c-4a22-9de6-37062a14e6f2-kube-api-access-fn7nr\") pod \"hello-world-app-5d77478584-4ndcs\" (UID: \"13c9cb77-bd9c-4a22-9de6-37062a14e6f2\") " pod="default/hello-world-app-5d77478584-4ndcs"
	Nov 07 23:07:12 addons-890770 kubelet[1556]: W1107 23:07:12.412586    1556 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/8157f8cfbc48a05567b045e64347713dcb1771dfe0057f8640598ef89f485011/crio-dcf65ba393830309502c77bd6eed3b26157a5de963c6134e77c2bf0737037f4c WatchSource:0}: Error finding container dcf65ba393830309502c77bd6eed3b26157a5de963c6134e77c2bf0737037f4c: Status 404 returned error can't find the container with id dcf65ba393830309502c77bd6eed3b26157a5de963c6134e77c2bf0737037f4c
	Nov 07 23:07:13 addons-890770 kubelet[1556]: I1107 23:07:13.139907    1556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j5bg\" (UniqueName: \"kubernetes.io/projected/5ba69908-3e0e-4a80-93f6-33319cf6052e-kube-api-access-2j5bg\") pod \"5ba69908-3e0e-4a80-93f6-33319cf6052e\" (UID: \"5ba69908-3e0e-4a80-93f6-33319cf6052e\") "
	Nov 07 23:07:13 addons-890770 kubelet[1556]: I1107 23:07:13.141738    1556 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba69908-3e0e-4a80-93f6-33319cf6052e-kube-api-access-2j5bg" (OuterVolumeSpecName: "kube-api-access-2j5bg") pod "5ba69908-3e0e-4a80-93f6-33319cf6052e" (UID: "5ba69908-3e0e-4a80-93f6-33319cf6052e"). InnerVolumeSpecName "kube-api-access-2j5bg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 07 23:07:13 addons-890770 kubelet[1556]: I1107 23:07:13.240614    1556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2j5bg\" (UniqueName: \"kubernetes.io/projected/5ba69908-3e0e-4a80-93f6-33319cf6052e-kube-api-access-2j5bg\") on node \"addons-890770\" DevicePath \"\""
	Nov 07 23:07:13 addons-890770 kubelet[1556]: I1107 23:07:13.486257    1556 scope.go:117] "RemoveContainer" containerID="ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f"
	Nov 07 23:07:13 addons-890770 kubelet[1556]: I1107 23:07:13.506161    1556 scope.go:117] "RemoveContainer" containerID="ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f"
	Nov 07 23:07:13 addons-890770 kubelet[1556]: E1107 23:07:13.506573    1556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f\": container with ID starting with ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f not found: ID does not exist" containerID="ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f"
	Nov 07 23:07:13 addons-890770 kubelet[1556]: I1107 23:07:13.506615    1556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f"} err="failed to get container status \"ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f\": rpc error: code = NotFound desc = could not find container \"ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f\": container with ID starting with ddf306b86c4b99ffc073ee39a72b64fdd47a8ee277a4a5b7a325c5f95976dc0f not found: ID does not exist"
	Nov 07 23:07:14 addons-890770 kubelet[1556]: I1107 23:07:14.284631    1556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="333d3758-0508-4e13-a33e-14cac8696299" path="/var/lib/kubelet/pods/333d3758-0508-4e13-a33e-14cac8696299/volumes"
	Nov 07 23:07:14 addons-890770 kubelet[1556]: I1107 23:07:14.285143    1556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5ba69908-3e0e-4a80-93f6-33319cf6052e" path="/var/lib/kubelet/pods/5ba69908-3e0e-4a80-93f6-33319cf6052e/volumes"
	Nov 07 23:07:14 addons-890770 kubelet[1556]: I1107 23:07:14.285560    1556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7a045d5a-8c1c-46f0-b445-f12138890cef" path="/var/lib/kubelet/pods/7a045d5a-8c1c-46f0-b445-f12138890cef/volumes"
	Nov 07 23:07:17 addons-890770 kubelet[1556]: I1107 23:07:17.401456    1556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glpwc\" (UniqueName: \"kubernetes.io/projected/d6ec01f6-357b-4e77-b472-0b3d7320b3ce-kube-api-access-glpwc\") pod \"d6ec01f6-357b-4e77-b472-0b3d7320b3ce\" (UID: \"d6ec01f6-357b-4e77-b472-0b3d7320b3ce\") "
	Nov 07 23:07:17 addons-890770 kubelet[1556]: I1107 23:07:17.401532    1556 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6ec01f6-357b-4e77-b472-0b3d7320b3ce-webhook-cert\") pod \"d6ec01f6-357b-4e77-b472-0b3d7320b3ce\" (UID: \"d6ec01f6-357b-4e77-b472-0b3d7320b3ce\") "
	Nov 07 23:07:17 addons-890770 kubelet[1556]: I1107 23:07:17.403484    1556 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ec01f6-357b-4e77-b472-0b3d7320b3ce-kube-api-access-glpwc" (OuterVolumeSpecName: "kube-api-access-glpwc") pod "d6ec01f6-357b-4e77-b472-0b3d7320b3ce" (UID: "d6ec01f6-357b-4e77-b472-0b3d7320b3ce"). InnerVolumeSpecName "kube-api-access-glpwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 07 23:07:17 addons-890770 kubelet[1556]: I1107 23:07:17.403734    1556 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ec01f6-357b-4e77-b472-0b3d7320b3ce-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d6ec01f6-357b-4e77-b472-0b3d7320b3ce" (UID: "d6ec01f6-357b-4e77-b472-0b3d7320b3ce"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:07:17 addons-890770 kubelet[1556]: I1107 23:07:17.496651    1556 scope.go:117] "RemoveContainer" containerID="bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b"
	Nov 07 23:07:17 addons-890770 kubelet[1556]: I1107 23:07:17.502014    1556 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-glpwc\" (UniqueName: \"kubernetes.io/projected/d6ec01f6-357b-4e77-b472-0b3d7320b3ce-kube-api-access-glpwc\") on node \"addons-890770\" DevicePath \"\""
	Nov 07 23:07:17 addons-890770 kubelet[1556]: I1107 23:07:17.502048    1556 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d6ec01f6-357b-4e77-b472-0b3d7320b3ce-webhook-cert\") on node \"addons-890770\" DevicePath \"\""
	Nov 07 23:07:17 addons-890770 kubelet[1556]: I1107 23:07:17.512387    1556 scope.go:117] "RemoveContainer" containerID="bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b"
	Nov 07 23:07:17 addons-890770 kubelet[1556]: E1107 23:07:17.512847    1556 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b\": container with ID starting with bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b not found: ID does not exist" containerID="bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b"
	Nov 07 23:07:17 addons-890770 kubelet[1556]: I1107 23:07:17.512894    1556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b"} err="failed to get container status \"bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b\": rpc error: code = NotFound desc = could not find container \"bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b\": container with ID starting with bfc260ff984e5dee754acf5941245ecb7f9e20e6f538df46212f91a467433b0b not found: ID does not exist"
	Nov 07 23:07:18 addons-890770 kubelet[1556]: I1107 23:07:18.284443    1556 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d6ec01f6-357b-4e77-b472-0b3d7320b3ce" path="/var/lib/kubelet/pods/d6ec01f6-357b-4e77-b472-0b3d7320b3ce/volumes"
	
	* 
	* ==> storage-provisioner [89fa989e2db7a7084af0b5a0539ede9cbe0899fa84556b39c23e568855c3e78e] <==
	* I1107 23:03:16.281684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 23:03:16.292388       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 23:03:16.292771       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 23:03:16.301073       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 23:03:16.301160       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cdc77481-8eab-487c-a634-938ad569ce69", APIVersion:"v1", ResourceVersion:"890", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-890770_0ee779cb-257c-40d0-ba90-ac961380696a became leader
	I1107 23:03:16.301340       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-890770_0ee779cb-257c-40d0-ba90-ac961380696a!
	I1107 23:03:16.402077       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-890770_0ee779cb-257c-40d0-ba90-ac961380696a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-890770 -n addons-890770
helpers_test.go:261: (dbg) Run:  kubectl --context addons-890770 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.18s)
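For local debugging, the sequence this test automates can be replayed by hand. The sketch below is not the test harness itself: it assumes a running minikube profile named addons-890770 with the ingress addon enabled and the repo's testdata/ manifests, and it exits early when kubectl or the cluster is unavailable.

```shell
#!/usr/bin/env sh
# Manual replay of the TestAddons/parallel/Ingress steps (a sketch; profile
# name and manifest paths are taken from the test output above).
set -eu
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not found; skipping"; exit 0; }

CTX=addons-890770
kubectl --context "$CTX" version >/dev/null 2>&1 || { echo "cluster not reachable; skipping"; exit 0; }

# 1. Wait for the ingress-nginx controller pod to become Ready.
kubectl --context "$CTX" -n ingress-nginx wait pod \
  --selector=app.kubernetes.io/component=controller \
  --for=condition=ready --timeout=90s

# 2. Apply the test ingress plus the backing nginx pod and service.
kubectl --context "$CTX" replace --force -f testdata/nginx-ingress-v1.yaml
kubectl --context "$CTX" replace --force -f testdata/nginx-pod-svc.yaml
kubectl --context "$CTX" wait pod -l run=nginx --for=condition=ready --timeout=8m

# 3. The step that timed out here: curl the ingress from inside the node,
#    routing by Host header.
minikube -p "$CTX" ssh "curl -s -H 'Host: nginx.example.com' http://127.0.0.1/"
```

When the addon is healthy, step 3 returns the nginx welcome page; the 154 s failure above corresponds to this curl never getting a response.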

x
+
TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (204.079892ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.20s)
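The exit status and log path in the stderr above suggest a quick local re-check. The sketch below simply re-runs the failing subcommand and dumps the referenced log file when it exists; the binary and log paths come from the output above, and the script skips when the binary is absent.

```shell
#!/usr/bin/env sh
# Re-run the failing `license` subcommand and show its log (a debugging
# sketch; paths are taken from the test output above).
set -u
BIN=out/minikube-linux-amd64
[ -x "$BIN" ] || { echo "$BIN not found; skipping"; exit 0; }

"$BIN" license || echo "license failed with exit status $?"

LOG=/tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log
if [ -f "$LOG" ]; then
  cat "$LOG"
fi
```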

x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.166910843s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 image ls: (2.480881324s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-773400" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.65s)
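The load-then-verify check this test performs can be reproduced by hand. The sketch below assumes a running profile named functional-773400 and a saved tarball next to the working directory (the tarball path here is an assumption; the test used a Jenkins workspace path), and it skips when minikube or the profile is unavailable.

```shell
#!/usr/bin/env sh
# Manual replay of the ImageLoadFromFile check (a sketch; the profile name
# and expected tag come from the test output above).
set -eu
command -v minikube >/dev/null 2>&1 || { echo "minikube not found; skipping"; exit 0; }
minikube -p functional-773400 status >/dev/null 2>&1 || { echo "profile not running; skipping"; exit 0; }

TAR=./addon-resizer-save.tar   # assumed local path to the saved image tarball
minikube -p functional-773400 image load "$TAR"

# The assertion that failed: the loaded tag should appear in `image ls`.
if minikube -p functional-773400 image ls | grep -q 'addon-resizer:functional-773400'; then
  echo "image present after load"
else
  echo "image missing after load (reproduces the failure)"
fi
```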

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (182.77s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-124713 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-124713 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.884634737s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-124713 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-124713 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b2644845-0841-4e1f-9af3-fbb9a8c013c3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b2644845-0841-4e1f-9af3-fbb9a8c013c3] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.007827676s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-124713 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1107 23:14:15.302982   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:14:42.988970   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:15:53.400963   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:53.406226   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:53.416483   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:53.436792   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:53.477087   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:53.557412   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:53.717831   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:54.038403   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:54.679375   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:55.959917   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:15:58.520924   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:16:03.641378   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:16:13.882366   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-124713 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.276056536s)
** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-124713 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-124713 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.006582904s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-124713 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-124713 addons disable ingress-dns --alsologtostderr -v=1: (2.469231493s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-124713 addons disable ingress --alsologtostderr -v=1
E1107 23:16:34.363111   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-124713 addons disable ingress --alsologtostderr -v=1: (7.424436833s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-124713
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-124713:
-- stdout --
	[
	    {
	        "Id": "4b5353b4127d8b3e7248d2268637a1ca66ac3e1e16da8599f3681b53489bb90e",
	        "Created": "2023-11-07T23:12:11.717070493Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 57672,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:12:12.022986559Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dbc648475405a75e8c472743ce721cb0b74db98d9501831a17a27a54e2bd3e47",
	        "ResolvConfPath": "/var/lib/docker/containers/4b5353b4127d8b3e7248d2268637a1ca66ac3e1e16da8599f3681b53489bb90e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b5353b4127d8b3e7248d2268637a1ca66ac3e1e16da8599f3681b53489bb90e/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b5353b4127d8b3e7248d2268637a1ca66ac3e1e16da8599f3681b53489bb90e/hosts",
	        "LogPath": "/var/lib/docker/containers/4b5353b4127d8b3e7248d2268637a1ca66ac3e1e16da8599f3681b53489bb90e/4b5353b4127d8b3e7248d2268637a1ca66ac3e1e16da8599f3681b53489bb90e-json.log",
	        "Name": "/ingress-addon-legacy-124713",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-124713:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-124713",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/aad714b533bdc4b958b96e075445d818b2203161001dc9dbacaecb24a693986a-init/diff:/var/lib/docker/overlay2/ae2a32444c6a9314aa09825baf7df8a89e3a23e782d3f3ba648a13de53e3f1b1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aad714b533bdc4b958b96e075445d818b2203161001dc9dbacaecb24a693986a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aad714b533bdc4b958b96e075445d818b2203161001dc9dbacaecb24a693986a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aad714b533bdc4b958b96e075445d818b2203161001dc9dbacaecb24a693986a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-124713",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-124713/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-124713",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-124713",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-124713",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76172667f2e43226511c6c17b017021f91683eb7d654482fbb0d61823365c429",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/76172667f2e4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-124713": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4b5353b4127d",
	                        "ingress-addon-legacy-124713"
	                    ],
	                    "NetworkID": "4dd5ac17af5fbac7a1cc2b72fd9effa90e3b05331d65e45e020a6755fe0042b1",
	                    "EndpointID": "a343992d6480d5030e6828a212ca3f7aea144bf9ef7576690d066b6349bbc513",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-124713 -n ingress-addon-legacy-124713
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-124713 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-124713 logs -n 25: (1.092929408s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-773400 ssh findmnt        | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| ssh            | functional-773400 ssh findmnt        | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| mount          | -p functional-773400                 | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| tunnel         | functional-773400 tunnel             | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| tunnel         | functional-773400 tunnel             | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| service        | functional-773400 service            | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | hello-node-connect --url             |                             |         |         |                     |                     |
	| tunnel         | functional-773400 tunnel             | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| update-context | functional-773400                    | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-773400                    | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-773400                    | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-773400                    | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-773400                    | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-773400 ssh pgrep          | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-773400 image build -t     | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | localhost/my-image:functional-773400 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-773400                    | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-773400                    | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-773400 image ls           | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	| delete         | -p functional-773400                 | functional-773400           | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:11 UTC |
	| start          | -p ingress-addon-legacy-124713       | ingress-addon-legacy-124713 | jenkins | v1.32.0 | 07 Nov 23 23:11 UTC | 07 Nov 23 23:13 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-124713          | ingress-addon-legacy-124713 | jenkins | v1.32.0 | 07 Nov 23 23:13 UTC | 07 Nov 23 23:13 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-124713          | ingress-addon-legacy-124713 | jenkins | v1.32.0 | 07 Nov 23 23:13 UTC | 07 Nov 23 23:13 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-124713          | ingress-addon-legacy-124713 | jenkins | v1.32.0 | 07 Nov 23 23:14 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-124713 ip       | ingress-addon-legacy-124713 | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	| addons         | ingress-addon-legacy-124713          | ingress-addon-legacy-124713 | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-124713          | ingress-addon-legacy-124713 | jenkins | v1.32.0 | 07 Nov 23 23:16 UTC | 07 Nov 23 23:16 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:11:48
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:11:48.245879   57021 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:11:48.246137   57021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:11:48.246148   57021 out.go:309] Setting ErrFile to fd 2...
	I1107 23:11:48.246155   57021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:11:48.246379   57021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:11:48.246976   57021 out.go:303] Setting JSON to false
	I1107 23:11:48.248414   57021 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3258,"bootTime":1699395450,"procs":715,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:11:48.248473   57021 start.go:138] virtualization: kvm guest
	I1107 23:11:48.250895   57021 out.go:177] * [ingress-addon-legacy-124713] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:11:48.252461   57021 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:11:48.252480   57021 notify.go:220] Checking for updates...
	I1107 23:11:48.253994   57021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:11:48.255507   57021 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:11:48.257236   57021 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:11:48.258783   57021 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:11:48.260165   57021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:11:48.261661   57021 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:11:48.285151   57021 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:11:48.285269   57021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:11:48.339468   57021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-11-07 23:11:48.331021416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> SerpverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:11:48.339572   57021 docker.go:295] overlay module found
	I1107 23:11:48.342542   57021 out.go:177] * Using the docker driver based on user configuration
	I1107 23:11:48.344081   57021 start.go:298] selected driver: docker
	I1107 23:11:48.344096   57021 start.go:902] validating driver "docker" against <nil>
	I1107 23:11:48.344107   57021 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:11:48.344891   57021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:11:48.395443   57021 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-11-07 23:11:48.387493678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:11:48.395578   57021 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:11:48.395800   57021 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:11:48.397647   57021 out.go:177] * Using Docker driver with root privileges
	I1107 23:11:48.399228   57021 cni.go:84] Creating CNI manager for ""
	I1107 23:11:48.399250   57021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:11:48.399270   57021 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:11:48.399287   57021 start_flags.go:323] config:
	{Name:ingress-addon-legacy-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-124713 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:11:48.400834   57021 out.go:177] * Starting control plane node ingress-addon-legacy-124713 in cluster ingress-addon-legacy-124713
	I1107 23:11:48.402177   57021 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:11:48.403535   57021 out.go:177] * Pulling base image ...
	I1107 23:11:48.404902   57021 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:11:48.404927   57021 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:11:48.421223   57021 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 23:11:48.421248   57021 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 23:11:48.509033   57021 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1107 23:11:48.509067   57021 cache.go:56] Caching tarball of preloaded images
	I1107 23:11:48.509228   57021 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:11:48.511302   57021 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1107 23:11:48.513002   57021 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:11:48.626429   57021 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1107 23:12:03.243073   57021 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:12:03.243178   57021 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:12:04.257927   57021 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1107 23:12:04.258300   57021 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/config.json ...
	I1107 23:12:04.258335   57021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/config.json: {Name:mk4df66ee2fdd3a65f4e6a1581b1a45c2fc53203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:04.258502   57021 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:12:04.258531   57021 start.go:365] acquiring machines lock for ingress-addon-legacy-124713: {Name:mkbbf64818fd91884a069887de9cd4346aff3e23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:12:04.258573   57021 start.go:369] acquired machines lock for "ingress-addon-legacy-124713" in 33.735µs
	I1107 23:12:04.258592   57021 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-124713 Namespace:default APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:12:04.258666   57021 start.go:125] createHost starting for "" (driver="docker")
	I1107 23:12:04.261052   57021 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1107 23:12:04.261269   57021 start.go:159] libmachine.API.Create for "ingress-addon-legacy-124713" (driver="docker")
	I1107 23:12:04.261315   57021 client.go:168] LocalClient.Create starting
	I1107 23:12:04.261431   57021 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem
	I1107 23:12:04.261477   57021 main.go:141] libmachine: Decoding PEM data...
	I1107 23:12:04.261495   57021 main.go:141] libmachine: Parsing certificate...
	I1107 23:12:04.261563   57021 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem
	I1107 23:12:04.261590   57021 main.go:141] libmachine: Decoding PEM data...
	I1107 23:12:04.261599   57021 main.go:141] libmachine: Parsing certificate...
	I1107 23:12:04.261923   57021 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-124713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 23:12:04.278199   57021 cli_runner.go:211] docker network inspect ingress-addon-legacy-124713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 23:12:04.278283   57021 network_create.go:281] running [docker network inspect ingress-addon-legacy-124713] to gather additional debugging logs...
	I1107 23:12:04.278302   57021 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-124713
	W1107 23:12:04.293895   57021 cli_runner.go:211] docker network inspect ingress-addon-legacy-124713 returned with exit code 1
	I1107 23:12:04.293923   57021 network_create.go:284] error running [docker network inspect ingress-addon-legacy-124713]: docker network inspect ingress-addon-legacy-124713: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-124713 not found
	I1107 23:12:04.293938   57021 network_create.go:286] output of [docker network inspect ingress-addon-legacy-124713]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-124713 not found
	
	** /stderr **
	I1107 23:12:04.294053   57021 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:12:04.309862   57021 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002850f30}
	I1107 23:12:04.309911   57021 network_create.go:124] attempt to create docker network ingress-addon-legacy-124713 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1107 23:12:04.309957   57021 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-124713 ingress-addon-legacy-124713
	I1107 23:12:04.363813   57021 network_create.go:108] docker network ingress-addon-legacy-124713 192.168.49.0/24 created
	I1107 23:12:04.363849   57021 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-124713" container
	I1107 23:12:04.363913   57021 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:12:04.379513   57021 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-124713 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-124713 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:12:04.397327   57021 oci.go:103] Successfully created a docker volume ingress-addon-legacy-124713
	I1107 23:12:04.397409   57021 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-124713-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-124713 --entrypoint /usr/bin/test -v ingress-addon-legacy-124713:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:12:06.139498   57021 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-124713-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-124713 --entrypoint /usr/bin/test -v ingress-addon-legacy-124713:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (1.742041423s)
	I1107 23:12:06.139533   57021 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-124713
	I1107 23:12:06.139550   57021 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:12:06.139568   57021 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:12:06.139622   57021 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-124713:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:12:11.649261   57021 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-124713:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.509561677s)
	I1107 23:12:11.649297   57021 kic.go:203] duration metric: took 5.509727 seconds to extract preloaded images to volume
	W1107 23:12:11.649438   57021 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:12:11.649576   57021 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:12:11.702634   57021 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-124713 --name ingress-addon-legacy-124713 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-124713 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-124713 --network ingress-addon-legacy-124713 --ip 192.168.49.2 --volume ingress-addon-legacy-124713:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:12:12.031494   57021 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-124713 --format={{.State.Running}}
	I1107 23:12:12.049146   57021 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-124713 --format={{.State.Status}}
	I1107 23:12:12.067554   57021 cli_runner.go:164] Run: docker exec ingress-addon-legacy-124713 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:12:12.132272   57021 oci.go:144] the created container "ingress-addon-legacy-124713" has a running status.
	I1107 23:12:12.132300   57021 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa...
	I1107 23:12:12.235552   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1107 23:12:12.235595   57021 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:12:12.255922   57021 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-124713 --format={{.State.Status}}
	I1107 23:12:12.274075   57021 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:12:12.274101   57021 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-124713 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 23:12:12.354770   57021 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-124713 --format={{.State.Status}}
	I1107 23:12:12.377730   57021 machine.go:88] provisioning docker machine ...
	I1107 23:12:12.377768   57021 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-124713"
	I1107 23:12:12.377830   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:12.395920   57021 main.go:141] libmachine: Using SSH client type: native
	I1107 23:12:12.396294   57021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1107 23:12:12.396312   57021 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-124713 && echo "ingress-addon-legacy-124713" | sudo tee /etc/hostname
	I1107 23:12:12.396961   57021 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33894->127.0.0.1:32787: read: connection reset by peer
	I1107 23:12:15.522268   57021 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-124713
	
	I1107 23:12:15.522360   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:15.539071   57021 main.go:141] libmachine: Using SSH client type: native
	I1107 23:12:15.539543   57021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1107 23:12:15.539569   57021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-124713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-124713/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-124713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:12:15.651881   57021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:12:15.651931   57021 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9432/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9432/.minikube}
	I1107 23:12:15.651954   57021 ubuntu.go:177] setting up certificates
	I1107 23:12:15.651965   57021 provision.go:83] configureAuth start
	I1107 23:12:15.652012   57021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-124713
	I1107 23:12:15.669714   57021 provision.go:138] copyHostCerts
	I1107 23:12:15.669758   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem
	I1107 23:12:15.669782   57021 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem, removing ...
	I1107 23:12:15.669788   57021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem
	I1107 23:12:15.669851   57021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem (1078 bytes)
	I1107 23:12:15.669920   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem
	I1107 23:12:15.669936   57021 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem, removing ...
	I1107 23:12:15.669941   57021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem
	I1107 23:12:15.669967   57021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem (1123 bytes)
	I1107 23:12:15.670009   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem
	I1107 23:12:15.670027   57021 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem, removing ...
	I1107 23:12:15.670033   57021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem
	I1107 23:12:15.670052   57021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem (1675 bytes)
	I1107 23:12:15.670131   57021 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-124713 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-124713]
	I1107 23:12:15.731703   57021 provision.go:172] copyRemoteCerts
	I1107 23:12:15.731759   57021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:12:15.731816   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:15.748907   57021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa Username:docker}
	I1107 23:12:15.840053   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:12:15.840110   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:12:15.860985   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:12:15.861049   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1107 23:12:15.882314   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:12:15.882369   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:12:15.903403   57021 provision.go:86] duration metric: configureAuth took 251.42648ms
	I1107 23:12:15.903429   57021 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:12:15.903586   57021 config.go:182] Loaded profile config "ingress-addon-legacy-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1107 23:12:15.903696   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:15.920903   57021 main.go:141] libmachine: Using SSH client type: native
	I1107 23:12:15.921248   57021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I1107 23:12:15.921264   57021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:12:16.146556   57021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:12:16.146587   57021 machine.go:91] provisioned docker machine in 3.768832472s
	I1107 23:12:16.146599   57021 client.go:171] LocalClient.Create took 11.885272701s
	I1107 23:12:16.146619   57021 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-124713" took 11.885350987s
	I1107 23:12:16.146630   57021 start.go:300] post-start starting for "ingress-addon-legacy-124713" (driver="docker")
	I1107 23:12:16.146645   57021 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:12:16.146728   57021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:12:16.146800   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:16.163082   57021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa Username:docker}
	I1107 23:12:16.248260   57021 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:12:16.251168   57021 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:12:16.251197   57021 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:12:16.251208   57021 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:12:16.251216   57021 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:12:16.251225   57021 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/addons for local assets ...
	I1107 23:12:16.251274   57021 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/files for local assets ...
	I1107 23:12:16.251348   57021 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> 162112.pem in /etc/ssl/certs
	I1107 23:12:16.251358   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> /etc/ssl/certs/162112.pem
	I1107 23:12:16.251449   57021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:12:16.258937   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem --> /etc/ssl/certs/162112.pem (1708 bytes)
	I1107 23:12:16.280025   57021 start.go:303] post-start completed in 133.378819ms
	I1107 23:12:16.280406   57021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-124713
	I1107 23:12:16.298451   57021 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/config.json ...
	I1107 23:12:16.298750   57021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:12:16.298803   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:16.314939   57021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa Username:docker}
	I1107 23:12:16.400351   57021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:12:16.404293   57021 start.go:128] duration metric: createHost completed in 12.145616074s
	I1107 23:12:16.404312   57021 start.go:83] releasing machines lock for "ingress-addon-legacy-124713", held for 12.145726604s
	I1107 23:12:16.404370   57021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-124713
	I1107 23:12:16.420958   57021 ssh_runner.go:195] Run: cat /version.json
	I1107 23:12:16.421005   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:16.421067   57021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:12:16.421145   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:16.437238   57021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa Username:docker}
	I1107 23:12:16.437898   57021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa Username:docker}
	I1107 23:12:16.519539   57021 ssh_runner.go:195] Run: systemctl --version
	I1107 23:12:16.606184   57021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:12:16.744063   57021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:12:16.748374   57021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:12:16.766771   57021 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:12:16.766859   57021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:12:16.793992   57021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1107 23:12:16.794016   57021 start.go:472] detecting cgroup driver to use...
	I1107 23:12:16.794054   57021 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:12:16.794102   57021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:12:16.808939   57021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:12:16.819275   57021 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:12:16.819327   57021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:12:16.831259   57021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:12:16.843481   57021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:12:16.920851   57021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:12:17.001615   57021 docker.go:219] disabling docker service ...
	I1107 23:12:17.001697   57021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:12:17.019898   57021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:12:17.030542   57021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:12:17.104785   57021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:12:17.180769   57021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:12:17.190790   57021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:12:17.205601   57021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1107 23:12:17.205670   57021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:12:17.214659   57021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:12:17.214716   57021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:12:17.224157   57021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:12:17.233133   57021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
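	The four sed edits above configure the cri-o drop-in: replace the pause image and cgroup driver in place, then delete any existing conmon_cgroup line before re-inserting it after cgroup_manager, so repeated runs never accumulate duplicates. A standalone sketch of the same pattern (GNU sed assumed; the local file path is illustrative, the real target is /etc/crio/crio.conf.d/02-crio.conf):

```shell
#!/bin/sh
# Illustrative stand-in for /etc/crio/crio.conf.d/02-crio.conf.
CONF=./02-crio.conf

# Seed a minimal drop-in for demonstration.
cat > "$CONF" <<'EOF'
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
pause_image = "registry.k8s.io/pause:3.9"
EOF

# 1. Point the runtime at the desired pause image.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
# 2. Switch the cgroup driver.
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
# 3. Delete any existing conmon_cgroup line, then append a fresh one
#    directly after the cgroup_manager line (idempotent on re-runs).
sed -i '/conmon_cgroup = .*/d' "$CONF"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

cat "$CONF"
```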
	I1107 23:12:17.242529   57021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:12:17.250936   57021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:12:17.258588   57021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:12:17.266938   57021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:12:17.343450   57021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:12:17.446696   57021 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:12:17.446766   57021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:12:17.450244   57021 start.go:540] Will wait 60s for crictl version
	I1107 23:12:17.450304   57021 ssh_runner.go:195] Run: which crictl
	I1107 23:12:17.453329   57021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
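	The "Will wait 60s for socket path" step above polls with stat until the runtime socket appears or the deadline passes. A minimal sketch of that wait loop, using a local file in place of /var/run/crio/crio.sock and a background job to simulate the runtime coming up:

```shell
#!/bin/sh
# SOCK is a local stand-in for /var/run/crio/crio.sock.
SOCK=./demo.sock
DEADLINE=$(( $(date +%s) + 60 ))

# Simulate crio creating its socket shortly after restart.
( sleep 1; : > "$SOCK" ) &

# Poll until the path exists, giving up after the deadline.
until stat "$SOCK" >/dev/null 2>&1; do
  if [ "$(date +%s)" -ge "$DEADLINE" ]; then
    echo "timed out waiting for $SOCK" >&2
    exit 1
  fi
  sleep 1
done
echo "socket ready: $SOCK"
```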
	I1107 23:12:17.486010   57021 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1107 23:12:17.486098   57021 ssh_runner.go:195] Run: crio --version
	I1107 23:12:17.518645   57021 ssh_runner.go:195] Run: crio --version
	I1107 23:12:17.553777   57021 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1107 23:12:17.555446   57021 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-124713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:12:17.572131   57021 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1107 23:12:17.575714   57021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
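	The /etc/hosts update above is made idempotent by filtering out any existing line for the name before appending the fresh mapping and replacing the file atomically. The same pattern against a local stand-in file (paths are illustrative):

```shell
#!/bin/sh
# HOSTS is a local stand-in for /etc/hosts.
HOSTS=./hosts
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$HOSTS"

TAB="$(printf '\t')"
# Drop any existing entry for the name, then append exactly one.
{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"
  printf '192.168.49.1\thost.minikube.internal\n'
} > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"

cat "$HOSTS"
```

Because the old entry is stripped first, running the update any number of times leaves exactly one mapping for the name.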
	I1107 23:12:17.585733   57021 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1107 23:12:17.585785   57021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:12:17.629465   57021 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1107 23:12:17.629520   57021 ssh_runner.go:195] Run: which lz4
	I1107 23:12:17.632797   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1107 23:12:17.632888   57021 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1107 23:12:17.636282   57021 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 23:12:17.636315   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1107 23:12:18.560705   57021 crio.go:444] Took 0.927851 seconds to copy over tarball
	I1107 23:12:18.560769   57021 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 23:12:20.853190   57021 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.292393282s)
	I1107 23:12:20.853216   57021 crio.go:451] Took 2.292486 seconds to extract the tarball
	I1107 23:12:20.853225   57021 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1107 23:12:20.921643   57021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:12:20.953550   57021 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1107 23:12:20.953571   57021 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 23:12:20.953633   57021 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:12:20.953675   57021 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:12:20.953649   57021 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:12:20.953733   57021 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:12:20.953753   57021 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:12:20.953713   57021 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:12:20.953805   57021 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1107 23:12:20.953832   57021 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1107 23:12:20.954860   57021 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:12:20.954868   57021 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:12:20.954901   57021 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:12:20.954901   57021 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1107 23:12:20.954869   57021 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:12:20.954927   57021 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:12:20.954919   57021 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:12:20.954928   57021 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1107 23:12:21.142981   57021 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1107 23:12:21.167568   57021 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:12:21.168165   57021 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:12:21.177080   57021 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1107 23:12:21.181243   57021 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1107 23:12:21.181863   57021 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1107 23:12:21.181903   57021 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:12:21.181939   57021 ssh_runner.go:195] Run: which crictl
	I1107 23:12:21.214555   57021 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1107 23:12:21.214601   57021 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:12:21.214653   57021 ssh_runner.go:195] Run: which crictl
	I1107 23:12:21.214662   57021 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1107 23:12:21.214697   57021 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:12:21.214736   57021 ssh_runner.go:195] Run: which crictl
	I1107 23:12:21.217670   57021 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1107 23:12:21.217729   57021 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1107 23:12:21.217766   57021 ssh_runner.go:195] Run: which crictl
	I1107 23:12:21.227588   57021 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1107 23:12:21.227630   57021 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1107 23:12:21.227668   57021 ssh_runner.go:195] Run: which crictl
	I1107 23:12:21.227712   57021 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1107 23:12:21.227726   57021 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1107 23:12:21.227786   57021 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1107 23:12:21.227837   57021 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1107 23:12:21.231847   57021 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1107 23:12:21.239506   57021 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:12:21.317558   57021 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:12:21.396233   57021 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1107 23:12:21.396333   57021 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1107 23:12:21.396418   57021 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1107 23:12:21.402208   57021 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1107 23:12:21.409645   57021 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1107 23:12:21.409696   57021 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:12:21.409749   57021 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1107 23:12:21.409770   57021 ssh_runner.go:195] Run: which crictl
	I1107 23:12:21.425648   57021 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1107 23:12:21.425700   57021 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:12:21.425704   57021 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1107 23:12:21.425735   57021 ssh_runner.go:195] Run: which crictl
	I1107 23:12:21.481567   57021 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 23:12:21.503930   57021 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1107 23:12:21.514684   57021 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1107 23:12:21.789286   57021 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:12:21.925084   57021 cache_images.go:92] LoadImages completed in 971.496101ms
	W1107 23:12:21.925165   57021 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I1107 23:12:21.925248   57021 ssh_runner.go:195] Run: crio config
	I1107 23:12:21.968940   57021 cni.go:84] Creating CNI manager for ""
	I1107 23:12:21.968961   57021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:12:21.968977   57021 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:12:21.968996   57021 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-124713 NodeName:ingress-addon-legacy-124713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1107 23:12:21.969121   57021 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-124713"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:12:21.969200   57021 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-124713 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-124713 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:12:21.969267   57021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1107 23:12:21.978053   57021 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:12:21.978116   57021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:12:21.985878   57021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1107 23:12:22.002227   57021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1107 23:12:22.018627   57021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1107 23:12:22.034949   57021 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:12:22.038125   57021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:12:22.048020   57021 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713 for IP: 192.168.49.2
	I1107 23:12:22.048051   57021 certs.go:190] acquiring lock for shared ca certs: {Name:mkbe2c97e30f744ec2581d086567acaa8822f7ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:22.048188   57021 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key
	I1107 23:12:22.048247   57021 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key
	I1107 23:12:22.048327   57021 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.key
	I1107 23:12:22.048343   57021 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt with IP's: []
	I1107 23:12:22.089081   57021 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt ...
	I1107 23:12:22.089112   57021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: {Name:mk1159ec5ba370dcdd11b8b43d3b992fdd4bb5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:22.089295   57021 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.key ...
	I1107 23:12:22.089322   57021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.key: {Name:mk582b2f93fd068029422e7e5e88c1fb12f404b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:22.089422   57021 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.key.dd3b5fb2
	I1107 23:12:22.089444   57021 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:12:22.245173   57021 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.crt.dd3b5fb2 ...
	I1107 23:12:22.245212   57021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.crt.dd3b5fb2: {Name:mk7bcdcff0930d99121a3fdbfaaca140ae693e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:22.245387   57021 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.key.dd3b5fb2 ...
	I1107 23:12:22.245418   57021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.key.dd3b5fb2: {Name:mk7a1cb3185cc3c97b0a0573c740468fc2ffc1bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:22.245509   57021 certs.go:337] copying /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.crt
	I1107 23:12:22.245612   57021 certs.go:341] copying /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.key
	I1107 23:12:22.245692   57021 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.key
	I1107 23:12:22.245711   57021 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.crt with IP's: []
	I1107 23:12:22.320269   57021 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.crt ...
	I1107 23:12:22.320306   57021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.crt: {Name:mka2c3244af42e350b6d5a0a98558012dbae61ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:22.320503   57021 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.key ...
	I1107 23:12:22.320522   57021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.key: {Name:mk1c2c0b62d004eabe8e89041b1c3318f99835cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:22.320616   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 23:12:22.320646   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 23:12:22.320662   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 23:12:22.320685   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 23:12:22.320716   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:12:22.320736   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:12:22.320755   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:12:22.320778   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:12:22.320843   57021 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211.pem (1338 bytes)
	W1107 23:12:22.320888   57021 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211_empty.pem, impossibly tiny 0 bytes
	I1107 23:12:22.320904   57021 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 23:12:22.320942   57021 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:12:22.320976   57021 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:12:22.321012   57021 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem (1675 bytes)
	I1107 23:12:22.321072   57021 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem (1708 bytes)
	I1107 23:12:22.321109   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211.pem -> /usr/share/ca-certificates/16211.pem
	I1107 23:12:22.321140   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> /usr/share/ca-certificates/162112.pem
	I1107 23:12:22.321155   57021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:12:22.321737   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:12:22.345296   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:12:22.367402   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:12:22.389316   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 23:12:22.410951   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:12:22.432965   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:12:22.454805   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:12:22.476709   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:12:22.498343   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211.pem --> /usr/share/ca-certificates/16211.pem (1338 bytes)
	I1107 23:12:22.519555   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem --> /usr/share/ca-certificates/162112.pem (1708 bytes)
	I1107 23:12:22.540330   57021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:12:22.561907   57021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:12:22.577759   57021 ssh_runner.go:195] Run: openssl version
	I1107 23:12:22.582995   57021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162112.pem && ln -fs /usr/share/ca-certificates/162112.pem /etc/ssl/certs/162112.pem"
	I1107 23:12:22.591520   57021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162112.pem
	I1107 23:12:22.594776   57021 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:08 /usr/share/ca-certificates/162112.pem
	I1107 23:12:22.594821   57021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162112.pem
	I1107 23:12:22.600963   57021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162112.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:12:22.609132   57021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:12:22.617970   57021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:12:22.620958   57021 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:12:22.621000   57021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:12:22.627052   57021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:12:22.635467   57021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16211.pem && ln -fs /usr/share/ca-certificates/16211.pem /etc/ssl/certs/16211.pem"
	I1107 23:12:22.643977   57021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16211.pem
	I1107 23:12:22.647309   57021 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:08 /usr/share/ca-certificates/16211.pem
	I1107 23:12:22.647377   57021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16211.pem
	I1107 23:12:22.653649   57021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16211.pem /etc/ssl/certs/51391683.0"
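The certificate steps above show minikube installing each CA under /usr/share/ca-certificates and then creating a hash-named symlink (e.g. b5213941.0) in /etc/ssl/certs, guarded by `test -L … || ln -fs …` so repeated runs are no-ops. A minimal sketch of that idempotent idiom, using a temp directory and a dummy hash-style name rather than a real OpenSSL subject hash:

```shell
# Sketch of the idempotent hash-symlink idiom from the log, in a temp dir.
# "b5213941.0" here is just a placeholder name, not a computed subject hash.
set -eu
dir=$(mktemp -d)
printf 'fake cert\n' > "$dir/minikubeCA.pem"
# Create the symlink only if it does not already exist, mirroring:
#   test -L /etc/ssl/certs/b5213941.0 || ln -fs ... b5213941.0
test -L "$dir/b5213941.0" || ln -fs "$dir/minikubeCA.pem" "$dir/b5213941.0"
# Running the same guard again changes nothing, so the step is safe to repeat.
test -L "$dir/b5213941.0" || ln -fs "$dir/minikubeCA.pem" "$dir/b5213941.0"
readlink "$dir/b5213941.0"
```

In the real flow the `.0` name comes from `openssl x509 -hash -noout -in <cert>`, which is why the log runs that command immediately before each symlink.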
	I1107 23:12:22.662259   57021 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:12:22.665402   57021 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:12:22.665448   57021 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-124713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-124713 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:12:22.665505   57021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:12:22.665555   57021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:12:22.698423   57021 cri.go:89] found id: ""
	I1107 23:12:22.698491   57021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:12:22.706752   57021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:12:22.714804   57021 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 23:12:22.714864   57021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:12:22.722812   57021 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:12:22.722856   57021 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 23:12:22.765835   57021 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1107 23:12:22.765928   57021 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:12:22.804305   57021 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:12:22.804408   57021 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1107 23:12:22.804449   57021 kubeadm.go:322] OS: Linux
	I1107 23:12:22.804495   57021 kubeadm.go:322] CGROUPS_CPU: enabled
	I1107 23:12:22.804568   57021 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1107 23:12:22.804622   57021 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1107 23:12:22.804663   57021 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1107 23:12:22.804704   57021 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1107 23:12:22.804744   57021 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1107 23:12:22.872097   57021 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:12:22.872218   57021 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:12:22.872384   57021 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:12:23.058154   57021 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:12:23.059069   57021 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:12:23.059166   57021 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:12:23.131383   57021 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:12:23.135372   57021 out.go:204]   - Generating certificates and keys ...
	I1107 23:12:23.135511   57021 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:12:23.135631   57021 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:12:23.321341   57021 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:12:23.593751   57021 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:12:23.788816   57021 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:12:23.852476   57021 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:12:23.929703   57021 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:12:23.929853   57021 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-124713 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:12:23.979947   57021 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:12:23.980129   57021 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-124713 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 23:12:24.141587   57021 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:12:24.356808   57021 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:12:24.492769   57021 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:12:24.492860   57021 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:12:24.674481   57021 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:12:24.767660   57021 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:12:24.880899   57021 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:12:24.967434   57021 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:12:24.968022   57021 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:12:24.970366   57021 out.go:204]   - Booting up control plane ...
	I1107 23:12:24.970486   57021 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:12:24.974761   57021 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:12:24.975677   57021 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:12:24.976534   57021 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:12:24.978476   57021 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:12:31.980888   57021 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002400 seconds
	I1107 23:12:31.981113   57021 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:12:31.992774   57021 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:12:32.508625   57021 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:12:32.508812   57021 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-124713 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1107 23:12:33.016434   57021 kubeadm.go:322] [bootstrap-token] Using token: sf6g31.wjp4c9hk50cq9ek3
	I1107 23:12:33.018211   57021 out.go:204]   - Configuring RBAC rules ...
	I1107 23:12:33.018341   57021 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:12:33.022440   57021 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:12:33.031304   57021 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:12:33.034159   57021 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:12:33.036139   57021 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:12:33.038818   57021 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:12:33.046137   57021 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:12:33.312766   57021 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:12:33.429443   57021 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:12:33.433099   57021 kubeadm.go:322] 
	I1107 23:12:33.433213   57021 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:12:33.433261   57021 kubeadm.go:322] 
	I1107 23:12:33.433376   57021 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:12:33.433388   57021 kubeadm.go:322] 
	I1107 23:12:33.433436   57021 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:12:33.433519   57021 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:12:33.433564   57021 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:12:33.433576   57021 kubeadm.go:322] 
	I1107 23:12:33.433636   57021 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:12:33.433703   57021 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:12:33.433768   57021 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:12:33.433774   57021 kubeadm.go:322] 
	I1107 23:12:33.433936   57021 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:12:33.434046   57021 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:12:33.434064   57021 kubeadm.go:322] 
	I1107 23:12:33.434163   57021 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sf6g31.wjp4c9hk50cq9ek3 \
	I1107 23:12:33.434299   57021 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 \
	I1107 23:12:33.434329   57021 kubeadm.go:322]     --control-plane 
	I1107 23:12:33.434333   57021 kubeadm.go:322] 
	I1107 23:12:33.434400   57021 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:12:33.434406   57021 kubeadm.go:322] 
	I1107 23:12:33.434476   57021 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sf6g31.wjp4c9hk50cq9ek3 \
	I1107 23:12:33.434599   57021 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 
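The `--discovery-token-ca-cert-hash sha256:…` value in the join commands above is a SHA-256 digest that joining nodes use to pin the cluster CA. As a sketch of just the `sha256:<hex>` format (kubeadm hashes the CA's DER-encoded public key; the bytes below are placeholders, not a real key):

```python
import hashlib

def discovery_hash(pubkey_der: bytes) -> str:
    """Format bytes in kubeadm's discovery-hash style: 'sha256:<hex digest>'.

    Real kubeadm digests the DER-encoded Subject Public Key Info of the
    cluster CA certificate; placeholder bytes are used here only to show
    the format of the value printed in the join command.
    """
    return "sha256:" + hashlib.sha256(pubkey_der).hexdigest()

print(discovery_hash(b"placeholder-public-key-bytes"))
```

A node passed this flag refuses to join unless the CA it is served matches the pinned digest, which is why the same hash appears in both the control-plane and worker join commands.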
	I1107 23:12:33.434789   57021 kubeadm.go:322] W1107 23:12:22.765290    1383 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1107 23:12:33.435024   57021 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1107 23:12:33.435182   57021 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:12:33.435370   57021 kubeadm.go:322] W1107 23:12:24.974385    1383 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 23:12:33.435531   57021 kubeadm.go:322] W1107 23:12:24.975436    1383 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 23:12:33.435547   57021 cni.go:84] Creating CNI manager for ""
	I1107 23:12:33.435556   57021 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:12:33.437822   57021 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:12:33.439408   57021 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:12:33.443156   57021 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1107 23:12:33.443172   57021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:12:33.459418   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:12:33.864348   57021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:12:33.864455   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=ingress-addon-legacy-124713 minikube.k8s.io/updated_at=2023_11_07T23_12_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:33.864465   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:33.871826   57021 ops.go:34] apiserver oom_adj: -16
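The `apiserver oom_adj: -16` line records minikube reading `/proc/<pid>/oom_adj` for kube-apiserver: a negative adjustment makes the kernel's OOM killer prefer other processes. A small sketch of reading the modern equivalent knob, `oom_score_adj`, on Linux (the helper name is mine, and it falls back to 0 where /proc is unavailable):

```python
from pathlib import Path

def oom_score_adj(pid="self"):
    """Read a process's OOM-killer score adjustment from /proc (Linux).

    Negative values, like the -16 oom_adj logged for kube-apiserver,
    make the OOM killer less likely to pick the process. Returns 0 if
    the /proc entry cannot be read (non-Linux host, missing pid).
    """
    try:
        return int(Path(f"/proc/{pid}/oom_score_adj").read_text().strip())
    except OSError:
        return 0

print(oom_score_adj())
```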
	I1107 23:12:33.983026   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:34.049410   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:34.617072   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:35.116631   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:35.616996   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:36.117013   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:36.616590   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:37.116602   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:37.616769   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:38.116993   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:38.617009   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:39.116857   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:39.617300   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:40.116956   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:40.616791   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:41.117045   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:41.616988   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:42.117028   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:42.616987   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:43.116929   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:43.616839   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:44.117498   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:44.616631   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:45.116629   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:45.616858   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:46.116998   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:46.616951   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:47.116699   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:47.616463   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:48.117044   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:48.617399   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:49.116603   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:49.616997   57021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:12:49.718764   57021 kubeadm.go:1081] duration metric: took 15.854371489s to wait for elevateKubeSystemPrivileges.
	I1107 23:12:49.718798   57021 kubeadm.go:406] StartCluster complete in 27.053353466s
	I1107 23:12:49.718817   57021 settings.go:142] acquiring lock: {Name:mke2e0b04eb18441805a33c4c4584e304f0bb176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:49.718895   57021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:12:49.719550   57021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/kubeconfig: {Name:mk2d252233a242c1461c7aa60d2f37a37a1be73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:12:49.719798   57021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:12:49.719826   57021 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:12:49.719899   57021 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-124713"
	I1107 23:12:49.719916   57021 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-124713"
	I1107 23:12:49.719971   57021 host.go:66] Checking if "ingress-addon-legacy-124713" exists ...
	I1107 23:12:49.719901   57021 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-124713"
	I1107 23:12:49.720034   57021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-124713"
	I1107 23:12:49.720008   57021 config.go:182] Loaded profile config "ingress-addon-legacy-124713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1107 23:12:49.720379   57021 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-124713 --format={{.State.Status}}
	I1107 23:12:49.720473   57021 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-124713 --format={{.State.Status}}
	I1107 23:12:49.720580   57021 kapi.go:59] client config for ingress-addon-legacy-124713: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:12:49.721292   57021 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 23:12:49.739283   57021 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-124713" context rescaled to 1 replicas
	I1107 23:12:49.739336   57021 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:12:49.741104   57021 out.go:177] * Verifying Kubernetes components...
	I1107 23:12:49.743719   57021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:12:49.745281   57021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:12:49.745154   57021 kapi.go:59] client config for ingress-addon-legacy-124713: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:12:49.747309   57021 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-124713"
	I1107 23:12:49.747363   57021 host.go:66] Checking if "ingress-addon-legacy-124713" exists ...
	I1107 23:12:49.747847   57021 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-124713 --format={{.State.Status}}
	I1107 23:12:49.747007   57021 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:12:49.747964   57021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:12:49.748046   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:49.767670   57021 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:12:49.767710   57021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:12:49.767780   57021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-124713
	I1107 23:12:49.772500   57021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa Username:docker}
	I1107 23:12:49.788673   57021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/ingress-addon-legacy-124713/id_rsa Username:docker}
	I1107 23:12:49.909097   57021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:12:49.909638   57021 kapi.go:59] client config for ingress-addon-legacy-124713: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:12:49.909978   57021 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-124713" to be "Ready" ...
	I1107 23:12:50.000239   57021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:12:50.001747   57021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:12:50.417562   57021 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1107 23:12:50.604118   57021 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1107 23:12:50.606256   57021 addons.go:502] enable addons completed in 886.424393ms: enabled=[default-storageclass storage-provisioner]
	I1107 23:12:51.919132   57021 node_ready.go:58] node "ingress-addon-legacy-124713" has status "Ready":"False"
	I1107 23:12:54.419896   57021 node_ready.go:58] node "ingress-addon-legacy-124713" has status "Ready":"False"
	I1107 23:12:56.919557   57021 node_ready.go:58] node "ingress-addon-legacy-124713" has status "Ready":"False"
	I1107 23:12:59.419634   57021 node_ready.go:58] node "ingress-addon-legacy-124713" has status "Ready":"False"
	I1107 23:13:01.919372   57021 node_ready.go:58] node "ingress-addon-legacy-124713" has status "Ready":"False"
	I1107 23:13:03.919618   57021 node_ready.go:49] node "ingress-addon-legacy-124713" has status "Ready":"True"
	I1107 23:13:03.919644   57021 node_ready.go:38] duration metric: took 14.00963339s waiting for node "ingress-addon-legacy-124713" to be "Ready" ...
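	The `node_ready` polling above repeatedly fetches the node and checks its `Ready` status condition until it reports `"True"`. A minimal sketch of that check, using a simplified local condition type instead of the real `k8s.io/api/core/v1` structs:

```go
package main

import "fmt"

// nodeCondition is a simplified stand-in for v1.NodeCondition.
type nodeCondition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", or "Unknown"
}

// isNodeReady reports whether the "Ready" condition is present and "True",
// which is the predicate the node_ready wait loop in the log polls for.
func isNodeReady(conds []nodeCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	notReady := []nodeCondition{{Type: "Ready", Status: "False"}}
	ready := []nodeCondition{{Type: "Ready", Status: "True"}}
	fmt.Println(isNodeReady(notReady), isNodeReady(ready)) // false true
}
```

	In the log, the node flips from `"Ready":"False"` to `"Ready":"True"` after about 14 seconds, at which point the wait returns and the per-pod readiness checks begin.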
	I1107 23:13:03.919654   57021 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:13:03.926663   57021 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:05.935166   57021 pod_ready.go:102] pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-07 23:12:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1107 23:13:08.436849   57021 pod_ready.go:102] pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-07 23:12:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1107 23:13:10.934464   57021 pod_ready.go:102] pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-07 23:12:49 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1107 23:13:12.936803   57021 pod_ready.go:102] pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace has status "Ready":"False"
	I1107 23:13:14.937639   57021 pod_ready.go:102] pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace has status "Ready":"False"
	I1107 23:13:17.437024   57021 pod_ready.go:102] pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace has status "Ready":"False"
	I1107 23:13:19.937094   57021 pod_ready.go:102] pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace has status "Ready":"False"
	I1107 23:13:21.937639   57021 pod_ready.go:92] pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace has status "Ready":"True"
	I1107 23:13:21.937668   57021 pod_ready.go:81] duration metric: took 18.010976616s waiting for pod "coredns-66bff467f8-kbhhz" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:21.937677   57021 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-124713" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:21.942201   57021 pod_ready.go:92] pod "etcd-ingress-addon-legacy-124713" in "kube-system" namespace has status "Ready":"True"
	I1107 23:13:21.942225   57021 pod_ready.go:81] duration metric: took 4.537962ms waiting for pod "etcd-ingress-addon-legacy-124713" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:21.942244   57021 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-124713" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:21.946771   57021 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-124713" in "kube-system" namespace has status "Ready":"True"
	I1107 23:13:21.946795   57021 pod_ready.go:81] duration metric: took 4.543605ms waiting for pod "kube-apiserver-ingress-addon-legacy-124713" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:21.946807   57021 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-124713" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:21.950831   57021 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-124713" in "kube-system" namespace has status "Ready":"True"
	I1107 23:13:21.950851   57021 pod_ready.go:81] duration metric: took 4.037096ms waiting for pod "kube-controller-manager-ingress-addon-legacy-124713" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:21.950860   57021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8dlxh" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:21.955042   57021 pod_ready.go:92] pod "kube-proxy-8dlxh" in "kube-system" namespace has status "Ready":"True"
	I1107 23:13:21.955063   57021 pod_ready.go:81] duration metric: took 4.197205ms waiting for pod "kube-proxy-8dlxh" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:21.955072   57021 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-124713" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:22.133308   57021 request.go:629] Waited for 178.169866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-124713
	I1107 23:13:22.333265   57021 request.go:629] Waited for 197.207024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-124713
	I1107 23:13:22.336023   57021 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-124713" in "kube-system" namespace has status "Ready":"True"
	I1107 23:13:22.336044   57021 pod_ready.go:81] duration metric: took 380.966444ms waiting for pod "kube-scheduler-ingress-addon-legacy-124713" in "kube-system" namespace to be "Ready" ...
	I1107 23:13:22.336056   57021 pod_ready.go:38] duration metric: took 18.416392349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:13:22.336096   57021 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:13:22.336168   57021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:13:22.347071   57021 api_server.go:72] duration metric: took 32.607678992s to wait for apiserver process to appear ...
	I1107 23:13:22.347094   57021 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:13:22.347110   57021 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1107 23:13:22.351972   57021 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1107 23:13:22.352931   57021 api_server.go:141] control plane version: v1.18.20
	I1107 23:13:22.352954   57021 api_server.go:131] duration metric: took 5.853465ms to wait for apiserver health ...
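	The healthz probe above treats the apiserver as healthy when `GET /healthz` returns HTTP 200 with body "ok", which is exactly what the log shows. A self-contained sketch of that contract against a local stand-in server (not minikube's actual probe code):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// healthzOK mirrors the check in the log: healthy means HTTP 200
// with a body of "ok".
func healthzOK(status int, body string) bool {
	return status == http.StatusOK && strings.TrimSpace(body) == "ok"
}

func main() {
	// Stand-in for the real apiserver endpoint at https://192.168.49.2:8443.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, "ok")
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(healthzOK(resp.StatusCode, string(body))) // true
}
```

	The real probe additionally uses the TLS client certificate and CA from the profile's `rest.Config`, which this sketch omits.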
	I1107 23:13:22.352962   57021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:13:22.533516   57021 request.go:629] Waited for 180.491601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:13:22.539168   57021 system_pods.go:59] 8 kube-system pods found
	I1107 23:13:22.539202   57021 system_pods.go:61] "coredns-66bff467f8-kbhhz" [53bfea01-9676-43f7-9282-d8ce94b5181a] Running
	I1107 23:13:22.539207   57021 system_pods.go:61] "etcd-ingress-addon-legacy-124713" [fabfe823-534d-431a-ba1f-3b2cbca71d38] Running
	I1107 23:13:22.539211   57021 system_pods.go:61] "kindnet-kntts" [7972af6f-63b3-48cd-a133-3f265791252d] Running
	I1107 23:13:22.539216   57021 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-124713" [1e811b0f-8b0f-4bff-8004-d55e04279696] Running
	I1107 23:13:22.539220   57021 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-124713" [715aef4b-02e6-4560-a84c-5733fc9b8965] Running
	I1107 23:13:22.539227   57021 system_pods.go:61] "kube-proxy-8dlxh" [6cc31575-ce7f-4c8b-9ce6-61ed826a0f70] Running
	I1107 23:13:22.539231   57021 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-124713" [6a64adb7-cea3-4d26-938b-5aa1cc7f8254] Running
	I1107 23:13:22.539235   57021 system_pods.go:61] "storage-provisioner" [e7ea2840-5023-4ebe-8b1d-aa35e4e57618] Running
	I1107 23:13:22.539240   57021 system_pods.go:74] duration metric: took 186.27409ms to wait for pod list to return data ...
	I1107 23:13:22.539252   57021 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:13:22.732662   57021 request.go:629] Waited for 193.310177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:13:22.735202   57021 default_sa.go:45] found service account: "default"
	I1107 23:13:22.735228   57021 default_sa.go:55] duration metric: took 195.968669ms for default service account to be created ...
	I1107 23:13:22.735236   57021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:13:22.932574   57021 request.go:629] Waited for 197.284701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:13:22.938601   57021 system_pods.go:86] 8 kube-system pods found
	I1107 23:13:22.938649   57021 system_pods.go:89] "coredns-66bff467f8-kbhhz" [53bfea01-9676-43f7-9282-d8ce94b5181a] Running
	I1107 23:13:22.938659   57021 system_pods.go:89] "etcd-ingress-addon-legacy-124713" [fabfe823-534d-431a-ba1f-3b2cbca71d38] Running
	I1107 23:13:22.938670   57021 system_pods.go:89] "kindnet-kntts" [7972af6f-63b3-48cd-a133-3f265791252d] Running
	I1107 23:13:22.938675   57021 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-124713" [1e811b0f-8b0f-4bff-8004-d55e04279696] Running
	I1107 23:13:22.938683   57021 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-124713" [715aef4b-02e6-4560-a84c-5733fc9b8965] Running
	I1107 23:13:22.938690   57021 system_pods.go:89] "kube-proxy-8dlxh" [6cc31575-ce7f-4c8b-9ce6-61ed826a0f70] Running
	I1107 23:13:22.938698   57021 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-124713" [6a64adb7-cea3-4d26-938b-5aa1cc7f8254] Running
	I1107 23:13:22.938706   57021 system_pods.go:89] "storage-provisioner" [e7ea2840-5023-4ebe-8b1d-aa35e4e57618] Running
	I1107 23:13:22.938718   57021 system_pods.go:126] duration metric: took 203.472162ms to wait for k8s-apps to be running ...
	I1107 23:13:22.938730   57021 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:13:22.938796   57021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:13:22.950164   57021 system_svc.go:56] duration metric: took 11.423716ms WaitForService to wait for kubelet.
	I1107 23:13:22.950197   57021 kubeadm.go:581] duration metric: took 33.210820103s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:13:22.950237   57021 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:13:23.132576   57021 request.go:629] Waited for 182.265532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1107 23:13:23.135444   57021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 23:13:23.135470   57021 node_conditions.go:123] node cpu capacity is 8
	I1107 23:13:23.135479   57021 node_conditions.go:105] duration metric: took 185.236484ms to run NodePressure ...
	I1107 23:13:23.135490   57021 start.go:228] waiting for startup goroutines ...
	I1107 23:13:23.135496   57021 start.go:233] waiting for cluster config update ...
	I1107 23:13:23.135504   57021 start.go:242] writing updated cluster config ...
	I1107 23:13:23.135754   57021 ssh_runner.go:195] Run: rm -f paused
	I1107 23:13:23.182204   57021 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1107 23:13:23.184345   57021 out.go:177] 
	W1107 23:13:23.186173   57021 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1107 23:13:23.187912   57021 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1107 23:13:23.189568   57021 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-124713" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.756348006Z" level=info msg="Removing pod sandbox: 5ac2a5027a2d6b05d131c7956b4b3a153f7484d6162e3280d50d4ada02fead7b" id=cec1424b-bbdc-4a63-849a-516d5c6ca382 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.761863717Z" level=info msg="Removed pod sandbox: 5ac2a5027a2d6b05d131c7956b4b3a153f7484d6162e3280d50d4ada02fead7b" id=cec1424b-bbdc-4a63-849a-516d5c6ca382 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.762337761Z" level=info msg="Stopping pod sandbox: 773496085c3fe24cbf78b24e88079940d646b2d0f89b435ef2f336bb611b4dcf" id=e7c22119-25ad-4b80-9c7d-49f43748395a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.762375834Z" level=info msg="Stopped pod sandbox (already stopped): 773496085c3fe24cbf78b24e88079940d646b2d0f89b435ef2f336bb611b4dcf" id=e7c22119-25ad-4b80-9c7d-49f43748395a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.762695027Z" level=info msg="Removing pod sandbox: 773496085c3fe24cbf78b24e88079940d646b2d0f89b435ef2f336bb611b4dcf" id=4e4385a4-1dfb-4646-bace-86c08538ef8f name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.768433866Z" level=info msg="Removed pod sandbox: 773496085c3fe24cbf78b24e88079940d646b2d0f89b435ef2f336bb611b4dcf" id=4e4385a4-1dfb-4646-bace-86c08538ef8f name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.768856249Z" level=info msg="Stopping pod sandbox: 6f5007e7d5f49259d3a4b0181714da8d89308c1e6f24b122c840b0bdc17d7ede" id=67df97bb-f757-4cc3-b728-fbb87494e805 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.768884041Z" level=info msg="Stopped pod sandbox (already stopped): 6f5007e7d5f49259d3a4b0181714da8d89308c1e6f24b122c840b0bdc17d7ede" id=67df97bb-f757-4cc3-b728-fbb87494e805 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.769171198Z" level=info msg="Removing pod sandbox: 6f5007e7d5f49259d3a4b0181714da8d89308c1e6f24b122c840b0bdc17d7ede" id=eba4842f-8402-42a6-9e6f-1bb9b2baa4ad name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Nov 07 23:16:33 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:33.774432468Z" level=info msg="Removed pod sandbox: 6f5007e7d5f49259d3a4b0181714da8d89308c1e6f24b122c840b0bdc17d7ede" id=eba4842f-8402-42a6-9e6f-1bb9b2baa4ad name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Nov 07 23:16:34 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:34.888020941Z" level=warning msg="Stopping container d5a42f18be8981fa524f15fd288a917476ce2b205d4959a6f53b183a5a8c8eff with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=f1c9da4c-ec27-48e7-b683-c449f393ebdb name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 07 23:16:34 ingress-addon-legacy-124713 conmon[3494]: conmon d5a42f18be8981fa524f <ninfo>: container 3506 exited with status 137
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.073072933Z" level=info msg="Stopped container d5a42f18be8981fa524f15fd288a917476ce2b205d4959a6f53b183a5a8c8eff: ingress-nginx/ingress-nginx-controller-7fcf777cb7-6sc49/controller" id=6b66267b-5507-4f3b-9cd8-abc075b57982 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.073101362Z" level=info msg="Stopped container d5a42f18be8981fa524f15fd288a917476ce2b205d4959a6f53b183a5a8c8eff: ingress-nginx/ingress-nginx-controller-7fcf777cb7-6sc49/controller" id=f1c9da4c-ec27-48e7-b683-c449f393ebdb name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.073638045Z" level=info msg="Stopping pod sandbox: e9500c5b3fc20fcba796f4932b17d64b722e9a11d93c46e3e3c9fbb3a4935df2" id=df1706e2-e374-4560-9f48-5541d011da3e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.073720101Z" level=info msg="Stopping pod sandbox: e9500c5b3fc20fcba796f4932b17d64b722e9a11d93c46e3e3c9fbb3a4935df2" id=523caf02-87a8-49d8-b555-f184ff0044c7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.076723814Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-5NOHNHOJEJGMZNJV - [0:0]\n:KUBE-HP-L22B7Y23JLNDNHEX - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-5NOHNHOJEJGMZNJV\n-X KUBE-HP-L22B7Y23JLNDNHEX\nCOMMIT\n"
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.078205328Z" level=info msg="Closing host port tcp:80"
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.078251443Z" level=info msg="Closing host port tcp:443"
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.079239523Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.079260884Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.079381833Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-6sc49 Namespace:ingress-nginx ID:e9500c5b3fc20fcba796f4932b17d64b722e9a11d93c46e3e3c9fbb3a4935df2 UID:b3a01b29-9c07-410b-b67e-30b408098aa5 NetNS:/var/run/netns/bb94dc5d-796e-4521-9c20-96abc7c2e522 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.079516533Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-6sc49 from CNI network \"kindnet\" (type=ptp)"
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.109545038Z" level=info msg="Stopped pod sandbox: e9500c5b3fc20fcba796f4932b17d64b722e9a11d93c46e3e3c9fbb3a4935df2" id=df1706e2-e374-4560-9f48-5541d011da3e name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 07 23:16:35 ingress-addon-legacy-124713 crio[965]: time="2023-11-07 23:16:35.109648618Z" level=info msg="Stopped pod sandbox (already stopped): e9500c5b3fc20fcba796f4932b17d64b722e9a11d93c46e3e3c9fbb3a4935df2" id=523caf02-87a8-49d8-b555-f184ff0044c7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	58353179b38b0       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            23 seconds ago      Running             hello-world-app           0                   2ec01961f9231       hello-world-app-5f5d8b66bb-t6vtb
	0c4d17982a0b6       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   3ca06d00d3386       nginx
	d5a42f18be898       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   e9500c5b3fc20       ingress-nginx-controller-7fcf777cb7-6sc49
	5b99fdedd2bbd       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   2f4d977a03e0a       coredns-66bff467f8-kbhhz
	6219699842b22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   77cc097df57ee       storage-provisioner
	85b34b29b30ba       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   9fd76a9e8eb36       kindnet-kntts
	44231d7ba2b1c       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   73d374ac1ac09       kube-proxy-8dlxh
	ff00ecfd08e19       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   abdbac1a1eeb2       kube-controller-manager-ingress-addon-legacy-124713
	6597214012cfd       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   cc56b19501be8       etcd-ingress-addon-legacy-124713
	1c007eca065a4       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   df01e97ef3718       kube-apiserver-ingress-addon-legacy-124713
	2873c3ccc78aa       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   f1984e84bbd1d       kube-scheduler-ingress-addon-legacy-124713
	
	* 
	* ==> coredns [5b99fdedd2bbdcf36a6a87d18a932c15a67f23613108dc9ab5523ed5b56f82f9] <==
	* [INFO] 10.244.0.5:52741 - 29141 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000197661s
	[INFO] 10.244.0.5:36228 - 10419 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004214702s
	[INFO] 10.244.0.5:46090 - 44056 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004893905s
	[INFO] 10.244.0.5:48059 - 25794 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004900229s
	[INFO] 10.244.0.5:56818 - 63793 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005142959s
	[INFO] 10.244.0.5:35885 - 49575 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005059737s
	[INFO] 10.244.0.5:53140 - 23191 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004474023s
	[INFO] 10.244.0.5:49197 - 24632 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004978794s
	[INFO] 10.244.0.5:44904 - 8068 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004560562s
	[INFO] 10.244.0.5:53140 - 48239 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004139424s
	[INFO] 10.244.0.5:35885 - 7721 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004350963s
	[INFO] 10.244.0.5:46090 - 14906 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004509478s
	[INFO] 10.244.0.5:56818 - 29100 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004481497s
	[INFO] 10.244.0.5:36228 - 27542 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004669419s
	[INFO] 10.244.0.5:53140 - 48010 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000076911s
	[INFO] 10.244.0.5:44904 - 57499 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004425118s
	[INFO] 10.244.0.5:35885 - 4556 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073185s
	[INFO] 10.244.0.5:36228 - 39297 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053856s
	[INFO] 10.244.0.5:46090 - 2555 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00017336s
	[INFO] 10.244.0.5:49197 - 46065 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004496046s
	[INFO] 10.244.0.5:56818 - 31062 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000199796s
	[INFO] 10.244.0.5:48059 - 47348 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004923913s
	[INFO] 10.244.0.5:44904 - 41767 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052429s
	[INFO] 10.244.0.5:49197 - 4040 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070346s
	[INFO] 10.244.0.5:48059 - 51976 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065974s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-124713
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-124713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=ingress-addon-legacy-124713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_12_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:12:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-124713
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:16:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:16:33 +0000   Tue, 07 Nov 2023 23:12:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:16:33 +0000   Tue, 07 Nov 2023 23:12:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:16:33 +0000   Tue, 07 Nov 2023 23:12:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:16:33 +0000   Tue, 07 Nov 2023 23:13:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-124713
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 580c51a074ee40c98b389326bedb7af8
	  System UUID:                7ab6a17c-13af-4344-a61e-9afa61711669
	  Boot ID:                    c97cc438-dd92-4788-91bf-3e8db350d4d3
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-t6vtb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-kbhhz                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m51s
	  kube-system                 etcd-ingress-addon-legacy-124713                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kindnet-kntts                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m51s
	  kube-system                 kube-apiserver-ingress-addon-legacy-124713             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-124713    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-8dlxh                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-scheduler-ingress-addon-legacy-124713             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m7s   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m7s   kubelet     Node ingress-addon-legacy-124713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s   kubelet     Node ingress-addon-legacy-124713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s   kubelet     Node ingress-addon-legacy-124713 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m50s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m37s  kubelet     Node ingress-addon-legacy-124713 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004919] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006591] FS-Cache: N-cookie d=0000000093c421a3{9p.inode} n=00000000fd5ed719
	[  +0.007437] FS-Cache: N-key=[8] '8aa00f0200000000'
	[  +0.258455] FS-Cache: Duplicate cookie detected
	[  +0.004692] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006756] FS-Cache: O-cookie d=0000000093c421a3{9p.inode} n=0000000051a2543c
	[  +0.007368] FS-Cache: O-key=[8] '97a00f0200000000'
	[  +0.004932] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006594] FS-Cache: N-cookie d=0000000093c421a3{9p.inode} n=00000000c900a7a4
	[  +0.008778] FS-Cache: N-key=[8] '97a00f0200000000'
	[  +8.636027] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 7 23:14] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[  +1.023718] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[  +2.019776] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[  +4.091691] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[  +8.191452] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[ +16.126809] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[Nov 7 23:15] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	
	* 
	* ==> etcd [6597214012cfd3e9fe2413e9d354dc6757f98dabdbef5d2bd89d411b17bcbb39] <==
	* raft2023/11/07 23:12:26 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/07 23:12:26 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/07 23:12:26 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-07 23:12:26.415941 W | auth: simple token is not cryptographically signed
	2023-11-07 23:12:26.484897 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-07 23:12:26.486080 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/07 23:12:26 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-07 23:12:26.486553 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-07 23:12:26.487541 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-07 23:12:26.487696 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-07 23:12:26.487797 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/07 23:12:27 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/07 23:12:27 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/07 23:12:27 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/07 23:12:27 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/07 23:12:27 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-07 23:12:27.110202 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-07 23:12:27.110959 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-07 23:12:27.111009 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-07 23:12:27.111045 I | embed: ready to serve client requests
	2023-11-07 23:12:27.111142 I | etcdserver: published {Name:ingress-addon-legacy-124713 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-07 23:12:27.111155 I | embed: ready to serve client requests
	2023-11-07 23:12:27.113056 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-07 23:12:27.113232 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-07 23:14:07.851595 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-ingress-dns-minikube.17957a38d716c95d\" " with result "range_response_count:1 size:815" took too long (160.197012ms) to execute
	
	* 
	* ==> kernel <==
	*  23:16:40 up 59 min,  0 users,  load average: 0.30, 0.66, 0.53
	Linux ingress-addon-legacy-124713 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [85b34b29b30badf81dbf09aef55836183f59adbc3fd07dcbccbc644ab37e04d6] <==
	* I1107 23:14:36.791295       1 main.go:227] handling current node
	I1107 23:14:46.796118       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:14:46.796146       1 main.go:227] handling current node
	I1107 23:14:56.799290       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:14:56.799313       1 main.go:227] handling current node
	I1107 23:15:06.811482       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:15:06.811511       1 main.go:227] handling current node
	I1107 23:15:16.815077       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:15:16.815102       1 main.go:227] handling current node
	I1107 23:15:26.827043       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:15:26.827084       1 main.go:227] handling current node
	I1107 23:15:36.834991       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:15:36.835017       1 main.go:227] handling current node
	I1107 23:15:46.846936       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:15:46.846961       1 main.go:227] handling current node
	I1107 23:15:56.850513       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:15:56.850539       1 main.go:227] handling current node
	I1107 23:16:06.862734       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:16:06.862778       1 main.go:227] handling current node
	I1107 23:16:16.866623       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:16:16.866650       1 main.go:227] handling current node
	I1107 23:16:26.874542       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:16:26.874568       1 main.go:227] handling current node
	I1107 23:16:36.882882       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1107 23:16:36.882907       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [1c007eca065a45f142509fce6ed9245fdff404748897ac2521e9abb19f0339e9] <==
	* I1107 23:12:30.371235       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1107 23:12:30.480761       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1107 23:12:30.480873       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:12:30.481070       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1107 23:12:30.481154       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1107 23:12:30.481204       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1107 23:12:31.370058       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1107 23:12:31.370099       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1107 23:12:31.374772       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1107 23:12:31.377593       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1107 23:12:31.377618       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1107 23:12:31.767174       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:12:31.811075       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1107 23:12:31.927023       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1107 23:12:31.927988       1 controller.go:609] quota admission added evaluator for: endpoints
	I1107 23:12:31.931162       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 23:12:32.662934       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1107 23:12:33.303990       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1107 23:12:33.420675       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1107 23:12:33.657164       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:12:49.483460       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1107 23:12:49.484767       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1107 23:13:23.892633       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1107 23:13:52.759352       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1107 23:16:32.115205       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc0097a7230), encoder:(*versioning.codec)(0xc0091f9ea0), buf:(*bytes.Buffer)(0xc00b50be30)})
	
	* 
	* ==> kube-controller-manager [ff00ecfd08e19745ceccf62bb14e960a0d2a13ddc602489aabefbfc2776aebe1] <==
	* reemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e6d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000bb1978)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1107 23:12:49.583087       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"9e79c199-06c1-435a-aeaf-e335acac3250", ResourceVersion:"234", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63834995553, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001a4e5e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001a4e600)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001a4e620), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a4e640), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a4e660), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a4e680), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001a4e6a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001a4e6e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0013631d0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000bb1b88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000522700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e6d8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000bb1bd0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1107 23:12:49.679923       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1107 23:12:49.722921       1 shared_informer.go:230] Caches are synced for disruption 
	I1107 23:12:49.722944       1 disruption.go:339] Sending events to api server.
	I1107 23:12:49.743903       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"49db8e0a-46f5-46ed-9723-8cfea4743691", APIVersion:"apps/v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1107 23:12:49.753655       1 shared_informer.go:230] Caches are synced for attach detach 
	I1107 23:12:49.755541       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d26097f7-e773-4c78-8455-282b192d338e", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-s899f
	I1107 23:12:49.879925       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I1107 23:12:49.990597       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1107 23:12:50.080476       1 shared_informer.go:230] Caches are synced for resource quota 
	I1107 23:12:50.080479       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1107 23:12:50.080560       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1107 23:12:50.088576       1 shared_informer.go:230] Caches are synced for resource quota 
	I1107 23:13:04.440182       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1107 23:13:23.883789       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"245266f5-1b1a-4ac0-be95-18fbe57b9bca", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1107 23:13:23.890131       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"7cf511c1-40a7-4a58-a169-02a16935f592", APIVersion:"apps/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-6sc49
	I1107 23:13:23.907193       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3f46112e-97b1-4bd6-8679-8a0667d23270", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-gwpbn
	I1107 23:13:23.985396       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"a418da1d-378d-4bb3-af50-f7c8bee5c5f8", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-h9rhp
	I1107 23:13:30.782156       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3f46112e-97b1-4bd6-8679-8a0667d23270", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1107 23:13:46.818574       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"a418da1d-378d-4bb3-af50-f7c8bee5c5f8", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1107 23:16:14.416501       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"a6600a44-f6cd-40c1-9607-6a6e1a6f5af1", APIVersion:"apps/v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1107 23:16:14.421264       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"2e2afd4b-bc7b-421d-b192-eed43aba01c6", APIVersion:"apps/v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-t6vtb
	E1107 23:16:37.577153       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-9nt2d" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [44231d7ba2b1c245b42490d3e46bab910ce94b4256079453f0060288a8e65db6] <==
	* W1107 23:12:50.494287       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1107 23:12:50.502718       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1107 23:12:50.502761       1 server_others.go:186] Using iptables Proxier.
	I1107 23:12:50.503019       1 server.go:583] Version: v1.18.20
	I1107 23:12:50.503528       1 config.go:133] Starting endpoints config controller
	I1107 23:12:50.503609       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1107 23:12:50.503572       1 config.go:315] Starting service config controller
	I1107 23:12:50.503706       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1107 23:12:50.603823       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1107 23:12:50.603827       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [2873c3ccc78aa44262d8e8eb3fe9d9b11ab4a1b7661bdb002b1227216ff6b98b] <==
	* W1107 23:12:30.403816       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1107 23:12:30.403823       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1107 23:12:30.485224       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1107 23:12:30.485328       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1107 23:12:30.487904       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1107 23:12:30.487943       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1107 23:12:30.489925       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1107 23:12:30.490839       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1107 23:12:30.490902       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1107 23:12:30.493440       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 23:12:30.493540       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:12:30.493637       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1107 23:12:30.493716       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:12:30.493823       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:12:30.493845       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:12:30.493924       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:12:30.493954       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:12:30.494007       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:12:30.494054       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:12:30.494176       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:12:31.460572       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:12:31.480973       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:12:31.581756       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1107 23:12:31.588364       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1107 23:12:32.088279       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Nov 07 23:16:23 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:23.686094    1864 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 07 23:16:23 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:23.686153    1864 pod_workers.go:191] Error syncing pod 196ae899-9346-4a6d-8d82-1e244404e6e2 ("kube-ingress-dns-minikube_kube-system(196ae899-9346-4a6d-8d82-1e244404e6e2)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 07 23:16:30 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:30.311790    1864 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-q68ct" (UniqueName: "kubernetes.io/secret/196ae899-9346-4a6d-8d82-1e244404e6e2-minikube-ingress-dns-token-q68ct") pod "196ae899-9346-4a6d-8d82-1e244404e6e2" (UID: "196ae899-9346-4a6d-8d82-1e244404e6e2")
	Nov 07 23:16:30 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:30.314009    1864 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196ae899-9346-4a6d-8d82-1e244404e6e2-minikube-ingress-dns-token-q68ct" (OuterVolumeSpecName: "minikube-ingress-dns-token-q68ct") pod "196ae899-9346-4a6d-8d82-1e244404e6e2" (UID: "196ae899-9346-4a6d-8d82-1e244404e6e2"). InnerVolumeSpecName "minikube-ingress-dns-token-q68ct". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:16:30 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:30.412162    1864 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-q68ct" (UniqueName: "kubernetes.io/secret/196ae899-9346-4a6d-8d82-1e244404e6e2-minikube-ingress-dns-token-q68ct") on node "ingress-addon-legacy-124713" DevicePath ""
	Nov 07 23:16:32 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:32.878506    1864 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6sc49.17957a615615705f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6sc49", UID:"b3a01b29-9c07-410b-b67e-30b408098aa5", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-124713"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14acf343444505f, ext:239606709648, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14acf343444505f, ext:239606709648, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6sc49.17957a615615705f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 07 23:16:32 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:32.887066    1864 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6sc49.17957a615615705f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6sc49", UID:"b3a01b29-9c07-410b-b67e-30b408098aa5", APIVersion:"v1", ResourceVersion:"481", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-124713"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14acf343444505f, ext:239606709648, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14acf34346f8041, ext:239609539960, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6sc49.17957a615615705f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 07 23:16:33 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:33.688406    1864 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: ed2115f8c776f5b87f686c8fc3b11772ccee01b18b9b6183bec543f806c5466c
	Nov 07 23:16:33 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:33.705557    1864 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 423fe32cdf92b257d8ad283631be11803c640620d54880108d22083232d6e7ad
	Nov 07 23:16:33 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:33.731707    1864 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a71d7e58c3cc448bebd8a85913fd0e7505f80732464476b7ecd371628b633e5c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a71d7e58c3cc448bebd8a85913fd0e7505f80732464476b7ecd371628b633e5c/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-patch-h9rhp_46fb9655-5d63-4649-911e-2600960bd93e/patch/2.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-h9rhp_46fb9655-5d63-4649-911e-2600960bd93e/patch/2.log: no such file or directory
	Nov 07 23:16:33 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:33.793624    1864 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/17bed3a33ba0ecc22f2821dd4d8eb5c7e726fc2c30dfd11059258a9e023a4451/diff" to get inode usage: stat /var/lib/containers/storage/overlay/17bed3a33ba0ecc22f2821dd4d8eb5c7e726fc2c30dfd11059258a9e023a4451/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:16:33 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:33.882760    1864 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/17bed3a33ba0ecc22f2821dd4d8eb5c7e726fc2c30dfd11059258a9e023a4451/diff" to get inode usage: stat /var/lib/containers/storage/overlay/17bed3a33ba0ecc22f2821dd4d8eb5c7e726fc2c30dfd11059258a9e023a4451/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:16:33 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:33.890184    1864 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f90299ee33c1ae2f7fb898e86befda7e946bdacad20dadb254a565ea7ccf8a60/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f90299ee33c1ae2f7fb898e86befda7e946bdacad20dadb254a565ea7ccf8a60/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:16:33 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:33.982465    1864 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f90299ee33c1ae2f7fb898e86befda7e946bdacad20dadb254a565ea7ccf8a60/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f90299ee33c1ae2f7fb898e86befda7e946bdacad20dadb254a565ea7ccf8a60/diff: no such file or directory, extraDiskErr: <nil>
	Nov 07 23:16:35 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:35.069568    1864 remote_runtime.go:508] ReopenContainerLog "d5a42f18be8981fa524f15fd288a917476ce2b205d4959a6f53b183a5a8c8eff" from runtime service failed: rpc error: code = Unknown desc = container is not created or running
	Nov 07 23:16:35 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:35.069614    1864 container_log_manager.go:243] Container "d5a42f18be8981fa524f15fd288a917476ce2b205d4959a6f53b183a5a8c8eff" log "/var/log/pods/ingress-nginx_ingress-nginx-controller-7fcf777cb7-6sc49_b3a01b29-9c07-410b-b67e-30b408098aa5/controller/0.log" doesn't exist, reopen container log failed: rpc error: code = Unknown desc = container is not created or running
	Nov 07 23:16:35 ingress-addon-legacy-124713 kubelet[1864]: W1107 23:16:35.353490    1864 container.go:526] Failed to update stats for container "/docker/4b5353b4127d8b3e7248d2268637a1ca66ac3e1e16da8599f3681b53489bb90e/crio-6f5007e7d5f49259d3a4b0181714da8d89308c1e6f24b122c840b0bdc17d7ede": unable to determine device info for dir: /var/lib/containers/storage/overlay/f90299ee33c1ae2f7fb898e86befda7e946bdacad20dadb254a565ea7ccf8a60/diff: stat failed on /var/lib/containers/storage/overlay/f90299ee33c1ae2f7fb898e86befda7e946bdacad20dadb254a565ea7ccf8a60/diff with error: no such file or directory, continuing to push stats
	Nov 07 23:16:36 ingress-addon-legacy-124713 kubelet[1864]: W1107 23:16:36.113680    1864 pod_container_deletor.go:77] Container "e9500c5b3fc20fcba796f4932b17d64b722e9a11d93c46e3e3c9fbb3a4935df2" not found in pod's containers
	Nov 07 23:16:37 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:37.029281    1864 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-cmhvg" (UniqueName: "kubernetes.io/secret/b3a01b29-9c07-410b-b67e-30b408098aa5-ingress-nginx-token-cmhvg") pod "b3a01b29-9c07-410b-b67e-30b408098aa5" (UID: "b3a01b29-9c07-410b-b67e-30b408098aa5")
	Nov 07 23:16:37 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:37.029333    1864 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b3a01b29-9c07-410b-b67e-30b408098aa5-webhook-cert") pod "b3a01b29-9c07-410b-b67e-30b408098aa5" (UID: "b3a01b29-9c07-410b-b67e-30b408098aa5")
	Nov 07 23:16:37 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:37.031218    1864 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a01b29-9c07-410b-b67e-30b408098aa5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b3a01b29-9c07-410b-b67e-30b408098aa5" (UID: "b3a01b29-9c07-410b-b67e-30b408098aa5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:16:37 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:37.031383    1864 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a01b29-9c07-410b-b67e-30b408098aa5-ingress-nginx-token-cmhvg" (OuterVolumeSpecName: "ingress-nginx-token-cmhvg") pod "b3a01b29-9c07-410b-b67e-30b408098aa5" (UID: "b3a01b29-9c07-410b-b67e-30b408098aa5"). InnerVolumeSpecName "ingress-nginx-token-cmhvg". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 07 23:16:37 ingress-addon-legacy-124713 kubelet[1864]: E1107 23:16:37.091129    1864 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: <nil>, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-controller-7fcf777cb7-6sc49_b3a01b29-9c07-410b-b67e-30b408098aa5/controller/0.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-controller-7fcf777cb7-6sc49_b3a01b29-9c07-410b-b67e-30b408098aa5/controller/0.log: no such file or directory
	Nov 07 23:16:37 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:37.129605    1864 reconciler.go:319] Volume detached for volume "ingress-nginx-token-cmhvg" (UniqueName: "kubernetes.io/secret/b3a01b29-9c07-410b-b67e-30b408098aa5-ingress-nginx-token-cmhvg") on node "ingress-addon-legacy-124713" DevicePath ""
	Nov 07 23:16:37 ingress-addon-legacy-124713 kubelet[1864]: I1107 23:16:37.129643    1864 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/b3a01b29-9c07-410b-b67e-30b408098aa5-webhook-cert") on node "ingress-addon-legacy-124713" DevicePath ""
	
	* 
	* ==> storage-provisioner [6219699842b22105cb172fd77bd53c3706b8c471dc2a36db4b236143f7d854b1] <==
	* I1107 23:13:08.834574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1107 23:13:08.843419       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1107 23:13:08.843466       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1107 23:13:08.849307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1107 23:13:08.849374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5be8e0a6-d04d-453b-9a2d-240043e6dc89", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-124713_319f2bf4-1c2e-4af9-a32e-e6b9ffd39463 became leader
	I1107 23:13:08.849474       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-124713_319f2bf4-1c2e-4af9-a32e-e6b9ffd39463!
	I1107 23:13:08.950148       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-124713_319f2bf4-1c2e-4af9-a32e-e6b9ffd39463!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-124713 -n ingress-addon-legacy-124713
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-124713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.77s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-7phrb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-7phrb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-7phrb -- sh -c "ping -c 1 192.168.58.1": exit status 1 (192.123314ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-7phrb): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-n8tmh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-n8tmh -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-n8tmh -- sh -c "ping -c 1 192.168.58.1": exit status 1 (200.601353ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-n8tmh): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-542158
helpers_test.go:235: (dbg) docker inspect multinode-542158:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2",
	        "Created": "2023-11-07T23:21:28.244247545Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 104136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:21:28.529878484Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dbc648475405a75e8c472743ce721cb0b74db98d9501831a17a27a54e2bd3e47",
	        "ResolvConfPath": "/var/lib/docker/containers/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/hosts",
	        "LogPath": "/var/lib/docker/containers/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2-json.log",
	        "Name": "/multinode-542158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-542158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-542158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7b543519a4d30b3f9dc039e3c23905c73fa10465ac4ef52bee0e4e89617b068d-init/diff:/var/lib/docker/overlay2/ae2a32444c6a9314aa09825baf7df8a89e3a23e782d3f3ba648a13de53e3f1b1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b543519a4d30b3f9dc039e3c23905c73fa10465ac4ef52bee0e4e89617b068d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b543519a4d30b3f9dc039e3c23905c73fa10465ac4ef52bee0e4e89617b068d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b543519a4d30b3f9dc039e3c23905c73fa10465ac4ef52bee0e4e89617b068d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-542158",
	                "Source": "/var/lib/docker/volumes/multinode-542158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-542158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-542158",
	                "name.minikube.sigs.k8s.io": "multinode-542158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "604d4aeb2ce98252a7ced58c3bfb30f07a32eb926b90b9a09e294badedab5da5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/604d4aeb2ce9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-542158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7dbe1742d15c",
	                        "multinode-542158"
	                    ],
	                    "NetworkID": "8dde1ced16eff461e9660c6236158c439aabcaa283b01f03cae66917ea891fc3",
	                    "EndpointID": "32e57dff69158bd1ce62ad93c9e6c3cf03b6b5977d863fc9bb41b624f72e91d6",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-542158 -n multinode-542158
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-542158 logs -n 25: (1.154586408s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-609122                           | mount-start-2-609122 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-609122 ssh -- ls                    | mount-start-2-609122 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-596467                           | mount-start-1-596467 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-609122 ssh -- ls                    | mount-start-2-609122 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-609122                           | mount-start-2-609122 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	| start   | -p mount-start-2-609122                           | mount-start-2-609122 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	| ssh     | mount-start-2-609122 ssh -- ls                    | mount-start-2-609122 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-609122                           | mount-start-2-609122 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	| delete  | -p mount-start-1-596467                           | mount-start-1-596467 | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:21 UTC |
	| start   | -p multinode-542158                               | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:21 UTC | 07 Nov 23 23:22 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- apply -f                   | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- rollout                    | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- get pods -o                | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- get pods -o                | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | busybox-5bc68d56bd-7phrb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | busybox-5bc68d56bd-n8tmh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | busybox-5bc68d56bd-7phrb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | busybox-5bc68d56bd-n8tmh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | busybox-5bc68d56bd-7phrb -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | busybox-5bc68d56bd-n8tmh -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- get pods -o                | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | busybox-5bc68d56bd-7phrb                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC |                     |
	|         | busybox-5bc68d56bd-7phrb -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC | 07 Nov 23 23:22 UTC |
	|         | busybox-5bc68d56bd-n8tmh                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-542158 -- exec                       | multinode-542158     | jenkins | v1.32.0 | 07 Nov 23 23:22 UTC |                     |
	|         | busybox-5bc68d56bd-n8tmh -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:21:22
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:21:22.231010  103523 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:21:22.231306  103523 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:21:22.231316  103523 out.go:309] Setting ErrFile to fd 2...
	I1107 23:21:22.231323  103523 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:21:22.231548  103523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:21:22.232229  103523 out.go:303] Setting JSON to false
	I1107 23:21:22.233387  103523 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3832,"bootTime":1699395450,"procs":524,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:21:22.233447  103523 start.go:138] virtualization: kvm guest
	I1107 23:21:22.235985  103523 out.go:177] * [multinode-542158] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:21:22.237669  103523 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:21:22.237705  103523 notify.go:220] Checking for updates...
	I1107 23:21:22.239565  103523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:21:22.241253  103523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:21:22.242967  103523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:21:22.244565  103523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:21:22.246295  103523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:21:22.247957  103523 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:21:22.270304  103523 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:21:22.270435  103523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:21:22.322241  103523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-07 23:21:22.313423401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:21:22.322372  103523 docker.go:295] overlay module found
	I1107 23:21:22.324541  103523 out.go:177] * Using the docker driver based on user configuration
	I1107 23:21:22.326471  103523 start.go:298] selected driver: docker
	I1107 23:21:22.326491  103523 start.go:902] validating driver "docker" against <nil>
	I1107 23:21:22.326504  103523 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:21:22.327396  103523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:21:22.380506  103523 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-07 23:21:22.372188305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:21:22.380662  103523 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:21:22.380880  103523 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 23:21:22.382928  103523 out.go:177] * Using Docker driver with root privileges
	I1107 23:21:22.384537  103523 cni.go:84] Creating CNI manager for ""
	I1107 23:21:22.384558  103523 cni.go:136] 0 nodes found, recommending kindnet
	I1107 23:21:22.384572  103523 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:21:22.384587  103523 start_flags.go:323] config:
	{Name:multinode-542158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-542158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:21:22.386192  103523 out.go:177] * Starting control plane node multinode-542158 in cluster multinode-542158
	I1107 23:21:22.387562  103523 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:21:22.388985  103523 out.go:177] * Pulling base image ...
	I1107 23:21:22.390193  103523 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:21:22.390236  103523 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:21:22.390249  103523 cache.go:56] Caching tarball of preloaded images
	I1107 23:21:22.390301  103523 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:21:22.390349  103523 preload.go:174] Found /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:21:22.390360  103523 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:21:22.390739  103523 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/config.json ...
	I1107 23:21:22.390772  103523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/config.json: {Name:mkd4f588557a90546eec6c5ca8c3f0c383d96f18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:22.406495  103523 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 23:21:22.406517  103523 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 23:21:22.406528  103523 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:21:22.406558  103523 start.go:365] acquiring machines lock for multinode-542158: {Name:mk4fed3af343cdeec7a3c5cd3784dda33e97a0d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:21:22.406651  103523 start.go:369] acquired machines lock for "multinode-542158" in 74.432µs
	I1107 23:21:22.406678  103523 start.go:93] Provisioning new machine with config: &{Name:multinode-542158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-542158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:21:22.406745  103523 start.go:125] createHost starting for "" (driver="docker")
	I1107 23:21:22.409105  103523 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 23:21:22.409320  103523 start.go:159] libmachine.API.Create for "multinode-542158" (driver="docker")
	I1107 23:21:22.409351  103523 client.go:168] LocalClient.Create starting
	I1107 23:21:22.409402  103523 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem
	I1107 23:21:22.409435  103523 main.go:141] libmachine: Decoding PEM data...
	I1107 23:21:22.409449  103523 main.go:141] libmachine: Parsing certificate...
	I1107 23:21:22.409494  103523 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem
	I1107 23:21:22.409517  103523 main.go:141] libmachine: Decoding PEM data...
	I1107 23:21:22.409532  103523 main.go:141] libmachine: Parsing certificate...
	I1107 23:21:22.409814  103523 cli_runner.go:164] Run: docker network inspect multinode-542158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 23:21:22.426702  103523 cli_runner.go:211] docker network inspect multinode-542158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 23:21:22.426779  103523 network_create.go:281] running [docker network inspect multinode-542158] to gather additional debugging logs...
	I1107 23:21:22.426800  103523 cli_runner.go:164] Run: docker network inspect multinode-542158
	W1107 23:21:22.443064  103523 cli_runner.go:211] docker network inspect multinode-542158 returned with exit code 1
	I1107 23:21:22.443094  103523 network_create.go:284] error running [docker network inspect multinode-542158]: docker network inspect multinode-542158: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-542158 not found
	I1107 23:21:22.443122  103523 network_create.go:286] output of [docker network inspect multinode-542158]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-542158 not found
	
	** /stderr **
	I1107 23:21:22.443251  103523 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:21:22.460329  103523 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d07f9e76d4df IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:51:57:2d:84} reservation:<nil>}
	I1107 23:21:22.460881  103523 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00289e590}
	I1107 23:21:22.460915  103523 network_create.go:124] attempt to create docker network multinode-542158 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1107 23:21:22.460967  103523 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-542158 multinode-542158
	I1107 23:21:22.515421  103523 network_create.go:108] docker network multinode-542158 192.168.58.0/24 created
	I1107 23:21:22.515449  103523 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-542158" container
	I1107 23:21:22.515503  103523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:21:22.532411  103523 cli_runner.go:164] Run: docker volume create multinode-542158 --label name.minikube.sigs.k8s.io=multinode-542158 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:21:22.549953  103523 oci.go:103] Successfully created a docker volume multinode-542158
	I1107 23:21:22.550030  103523 cli_runner.go:164] Run: docker run --rm --name multinode-542158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-542158 --entrypoint /usr/bin/test -v multinode-542158:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:21:23.055358  103523 oci.go:107] Successfully prepared a docker volume multinode-542158
	I1107 23:21:23.055400  103523 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:21:23.055432  103523 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:21:23.055493  103523 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-542158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:21:28.177621  103523 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-542158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.122058836s)
	I1107 23:21:28.177665  103523 kic.go:203] duration metric: took 5.122237 seconds to extract preloaded images to volume
	W1107 23:21:28.177855  103523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:21:28.177952  103523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:21:28.228938  103523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-542158 --name multinode-542158 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-542158 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-542158 --network multinode-542158 --ip 192.168.58.2 --volume multinode-542158:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:21:28.538128  103523 cli_runner.go:164] Run: docker container inspect multinode-542158 --format={{.State.Running}}
	I1107 23:21:28.555095  103523 cli_runner.go:164] Run: docker container inspect multinode-542158 --format={{.State.Status}}
	I1107 23:21:28.573507  103523 cli_runner.go:164] Run: docker exec multinode-542158 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:21:28.627913  103523 oci.go:144] the created container "multinode-542158" has a running status.
	I1107 23:21:28.627982  103523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa...
	I1107 23:21:28.863683  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1107 23:21:28.863731  103523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:21:28.885690  103523 cli_runner.go:164] Run: docker container inspect multinode-542158 --format={{.State.Status}}
	I1107 23:21:28.905556  103523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:21:28.905582  103523 kic_runner.go:114] Args: [docker exec --privileged multinode-542158 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 23:21:28.993354  103523 cli_runner.go:164] Run: docker container inspect multinode-542158 --format={{.State.Status}}
	I1107 23:21:29.012673  103523 machine.go:88] provisioning docker machine ...
	I1107 23:21:29.012713  103523 ubuntu.go:169] provisioning hostname "multinode-542158"
	I1107 23:21:29.012793  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:29.035722  103523 main.go:141] libmachine: Using SSH client type: native
	I1107 23:21:29.036143  103523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1107 23:21:29.036170  103523 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-542158 && echo "multinode-542158" | sudo tee /etc/hostname
	I1107 23:21:29.254735  103523 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-542158
	
	I1107 23:21:29.254813  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:29.271854  103523 main.go:141] libmachine: Using SSH client type: native
	I1107 23:21:29.272312  103523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1107 23:21:29.272337  103523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-542158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-542158/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-542158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:21:29.387722  103523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:21:29.387748  103523 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9432/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9432/.minikube}
	I1107 23:21:29.387784  103523 ubuntu.go:177] setting up certificates
	I1107 23:21:29.387796  103523 provision.go:83] configureAuth start
	I1107 23:21:29.387860  103523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542158
	I1107 23:21:29.403937  103523 provision.go:138] copyHostCerts
	I1107 23:21:29.403978  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem
	I1107 23:21:29.404007  103523 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem, removing ...
	I1107 23:21:29.404012  103523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem
	I1107 23:21:29.404080  103523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem (1078 bytes)
	I1107 23:21:29.404183  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem
	I1107 23:21:29.404217  103523 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem, removing ...
	I1107 23:21:29.404228  103523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem
	I1107 23:21:29.404272  103523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem (1123 bytes)
	I1107 23:21:29.404352  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem
	I1107 23:21:29.404374  103523 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem, removing ...
	I1107 23:21:29.404383  103523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem
	I1107 23:21:29.404418  103523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem (1675 bytes)
	I1107 23:21:29.404495  103523 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem org=jenkins.multinode-542158 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-542158]
	I1107 23:21:29.522498  103523 provision.go:172] copyRemoteCerts
	I1107 23:21:29.522549  103523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:21:29.522609  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:29.539489  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa Username:docker}
	I1107 23:21:29.624091  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:21:29.624163  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:21:29.645215  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:21:29.645264  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1107 23:21:29.666465  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:21:29.666528  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:21:29.687611  103523 provision.go:86] duration metric: configureAuth took 299.80273ms
	I1107 23:21:29.687645  103523 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:21:29.687862  103523 config.go:182] Loaded profile config "multinode-542158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:21:29.687968  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:29.705361  103523 main.go:141] libmachine: Using SSH client type: native
	I1107 23:21:29.705729  103523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I1107 23:21:29.705754  103523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:21:29.905549  103523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:21:29.905572  103523 machine.go:91] provisioned docker machine in 892.872367ms
	I1107 23:21:29.905591  103523 client.go:171] LocalClient.Create took 7.496223391s
	I1107 23:21:29.905614  103523 start.go:167] duration metric: libmachine.API.Create for "multinode-542158" took 7.496291795s
	I1107 23:21:29.905623  103523 start.go:300] post-start starting for "multinode-542158" (driver="docker")
	I1107 23:21:29.905634  103523 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:21:29.905694  103523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:21:29.905744  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:29.922905  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa Username:docker}
	I1107 23:21:30.008355  103523 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:21:30.011311  103523 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1107 23:21:30.011334  103523 command_runner.go:130] > NAME="Ubuntu"
	I1107 23:21:30.011343  103523 command_runner.go:130] > VERSION_ID="22.04"
	I1107 23:21:30.011351  103523 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1107 23:21:30.011364  103523 command_runner.go:130] > VERSION_CODENAME=jammy
	I1107 23:21:30.011370  103523 command_runner.go:130] > ID=ubuntu
	I1107 23:21:30.011377  103523 command_runner.go:130] > ID_LIKE=debian
	I1107 23:21:30.011389  103523 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1107 23:21:30.011398  103523 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1107 23:21:30.011411  103523 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1107 23:21:30.011419  103523 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1107 23:21:30.011424  103523 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1107 23:21:30.011462  103523 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:21:30.011484  103523 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:21:30.011493  103523 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:21:30.011501  103523 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:21:30.011509  103523 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/addons for local assets ...
	I1107 23:21:30.011555  103523 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/files for local assets ...
	I1107 23:21:30.011617  103523 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> 162112.pem in /etc/ssl/certs
	I1107 23:21:30.011628  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> /etc/ssl/certs/162112.pem
	I1107 23:21:30.011723  103523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:21:30.019126  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem --> /etc/ssl/certs/162112.pem (1708 bytes)
	I1107 23:21:30.040638  103523 start.go:303] post-start completed in 135.002819ms
	I1107 23:21:30.040963  103523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542158
	I1107 23:21:30.057057  103523 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/config.json ...
	I1107 23:21:30.057332  103523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:21:30.057382  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:30.073983  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa Username:docker}
	I1107 23:21:30.156214  103523 command_runner.go:130] > 27%!
	(MISSING)I1107 23:21:30.156371  103523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:21:30.160516  103523 command_runner.go:130] > 215G
	I1107 23:21:30.160542  103523 start.go:128] duration metric: createHost completed in 7.75378691s
	I1107 23:21:30.160552  103523 start.go:83] releasing machines lock for "multinode-542158", held for 7.753889907s
	I1107 23:21:30.160617  103523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542158
	I1107 23:21:30.177049  103523 ssh_runner.go:195] Run: cat /version.json
	I1107 23:21:30.177115  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:30.177123  103523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:21:30.177180  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:30.192920  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa Username:docker}
	I1107 23:21:30.193448  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa Username:docker}
	I1107 23:21:30.275455  103523 command_runner.go:130] > {"iso_version": "v1.32.0-1698920115-17545", "kicbase_version": "v0.0.42", "minikube_version": "v1.32.0", "commit": "adec9b238c91ffe56105b349a612d102f1601cd2"}
	I1107 23:21:30.364442  103523 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 23:21:30.366669  103523 ssh_runner.go:195] Run: systemctl --version
	I1107 23:21:30.370972  103523 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1107 23:21:30.371011  103523 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1107 23:21:30.371095  103523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:21:30.508771  103523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:21:30.512695  103523 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1107 23:21:30.512716  103523 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1107 23:21:30.512722  103523 command_runner.go:130] > Device: 37h/55d	Inode: 556991      Links: 1
	I1107 23:21:30.512733  103523 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:21:30.512745  103523 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1107 23:21:30.512755  103523 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1107 23:21:30.512761  103523 command_runner.go:130] > Change: 2023-11-07 23:01:51.763999730 +0000
	I1107 23:21:30.512768  103523 command_runner.go:130] >  Birth: 2023-11-07 23:01:51.763999730 +0000
	I1107 23:21:30.512896  103523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:21:30.530646  103523 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:21:30.530737  103523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:21:30.556835  103523 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1107 23:21:30.556866  103523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1107 23:21:30.556872  103523 start.go:472] detecting cgroup driver to use...
	I1107 23:21:30.556899  103523 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:21:30.556933  103523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:21:30.570110  103523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:21:30.579958  103523 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:21:30.580009  103523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:21:30.591821  103523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:21:30.604382  103523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:21:30.677062  103523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:21:30.690628  103523 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1107 23:21:30.753463  103523 docker.go:219] disabling docker service ...
	I1107 23:21:30.753527  103523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:21:30.771583  103523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:21:30.782187  103523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:21:30.793338  103523 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1107 23:21:30.861187  103523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:21:30.871827  103523 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1107 23:21:30.942355  103523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:21:30.952878  103523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:21:30.967751  103523 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1107 23:21:30.968878  103523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:21:30.968936  103523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:21:30.978079  103523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:21:30.978159  103523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:21:30.987371  103523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:21:30.996555  103523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:21:31.005525  103523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:21:31.014200  103523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:21:31.022148  103523 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1107 23:21:31.022227  103523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:21:31.030089  103523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:21:31.100313  103523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:21:31.197847  103523 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:21:31.197914  103523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:21:31.201276  103523 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1107 23:21:31.201298  103523 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 23:21:31.201304  103523 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1107 23:21:31.201311  103523 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:21:31.201316  103523 command_runner.go:130] > Access: 2023-11-07 23:21:31.184159374 +0000
	I1107 23:21:31.201322  103523 command_runner.go:130] > Modify: 2023-11-07 23:21:31.184159374 +0000
	I1107 23:21:31.201326  103523 command_runner.go:130] > Change: 2023-11-07 23:21:31.184159374 +0000
	I1107 23:21:31.201330  103523 command_runner.go:130] >  Birth: -
	I1107 23:21:31.201366  103523 start.go:540] Will wait 60s for crictl version
	I1107 23:21:31.201419  103523 ssh_runner.go:195] Run: which crictl
	I1107 23:21:31.204375  103523 command_runner.go:130] > /usr/bin/crictl
	I1107 23:21:31.204456  103523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:21:31.237325  103523 command_runner.go:130] > Version:  0.1.0
	I1107 23:21:31.237345  103523 command_runner.go:130] > RuntimeName:  cri-o
	I1107 23:21:31.237350  103523 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1107 23:21:31.237355  103523 command_runner.go:130] > RuntimeApiVersion:  v1
	I1107 23:21:31.237370  103523 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1107 23:21:31.237428  103523 ssh_runner.go:195] Run: crio --version
	I1107 23:21:31.269081  103523 command_runner.go:130] > crio version 1.24.6
	I1107 23:21:31.269106  103523 command_runner.go:130] > Version:          1.24.6
	I1107 23:21:31.269117  103523 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1107 23:21:31.269125  103523 command_runner.go:130] > GitTreeState:     clean
	I1107 23:21:31.269135  103523 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1107 23:21:31.269142  103523 command_runner.go:130] > GoVersion:        go1.18.2
	I1107 23:21:31.269150  103523 command_runner.go:130] > Compiler:         gc
	I1107 23:21:31.269161  103523 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:21:31.269174  103523 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:21:31.269189  103523 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:21:31.269208  103523 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:21:31.269223  103523 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:21:31.270689  103523 ssh_runner.go:195] Run: crio --version
	I1107 23:21:31.302460  103523 command_runner.go:130] > crio version 1.24.6
	I1107 23:21:31.302484  103523 command_runner.go:130] > Version:          1.24.6
	I1107 23:21:31.302491  103523 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1107 23:21:31.302496  103523 command_runner.go:130] > GitTreeState:     clean
	I1107 23:21:31.302502  103523 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1107 23:21:31.302506  103523 command_runner.go:130] > GoVersion:        go1.18.2
	I1107 23:21:31.302511  103523 command_runner.go:130] > Compiler:         gc
	I1107 23:21:31.302518  103523 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:21:31.302526  103523 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:21:31.302543  103523 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:21:31.302553  103523 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:21:31.302563  103523 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:21:31.306859  103523 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1107 23:21:31.308436  103523 cli_runner.go:164] Run: docker network inspect multinode-542158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:21:31.324530  103523 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1107 23:21:31.327980  103523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:21:31.337942  103523 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:21:31.338011  103523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:21:31.390311  103523 command_runner.go:130] > {
	I1107 23:21:31.390330  103523 command_runner.go:130] >   "images": [
	I1107 23:21:31.390337  103523 command_runner.go:130] >     {
	I1107 23:21:31.390344  103523 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1107 23:21:31.390349  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.390355  103523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1107 23:21:31.390360  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390366  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.390379  103523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1107 23:21:31.390395  103523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1107 23:21:31.390401  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390409  103523 command_runner.go:130] >       "size": "65258016",
	I1107 23:21:31.390413  103523 command_runner.go:130] >       "uid": null,
	I1107 23:21:31.390418  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.390425  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.390429  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.390435  103523 command_runner.go:130] >     },
	I1107 23:21:31.390438  103523 command_runner.go:130] >     {
	I1107 23:21:31.390447  103523 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1107 23:21:31.390453  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.390466  103523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 23:21:31.390476  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390487  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.390500  103523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1107 23:21:31.390510  103523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1107 23:21:31.390516  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390524  103523 command_runner.go:130] >       "size": "31470524",
	I1107 23:21:31.390531  103523 command_runner.go:130] >       "uid": null,
	I1107 23:21:31.390535  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.390542  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.390546  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.390555  103523 command_runner.go:130] >     },
	I1107 23:21:31.390565  103523 command_runner.go:130] >     {
	I1107 23:21:31.390576  103523 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1107 23:21:31.390587  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.390598  103523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1107 23:21:31.390604  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390609  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.390620  103523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1107 23:21:31.390631  103523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1107 23:21:31.390637  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390643  103523 command_runner.go:130] >       "size": "53621675",
	I1107 23:21:31.390653  103523 command_runner.go:130] >       "uid": null,
	I1107 23:21:31.390664  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.390672  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.390682  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.390691  103523 command_runner.go:130] >     },
	I1107 23:21:31.390700  103523 command_runner.go:130] >     {
	I1107 23:21:31.390715  103523 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1107 23:21:31.390723  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.390732  103523 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1107 23:21:31.390738  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390745  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.390761  103523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1107 23:21:31.390774  103523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1107 23:21:31.390799  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390814  103523 command_runner.go:130] >       "size": "295456551",
	I1107 23:21:31.390821  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.390829  103523 command_runner.go:130] >         "value": "0"
	I1107 23:21:31.390834  103523 command_runner.go:130] >       },
	I1107 23:21:31.390839  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.390850  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.390861  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.390867  103523 command_runner.go:130] >     },
	I1107 23:21:31.390876  103523 command_runner.go:130] >     {
	I1107 23:21:31.390890  103523 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1107 23:21:31.390900  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.390916  103523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1107 23:21:31.390924  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390931  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.390942  103523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1107 23:21:31.390958  103523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1107 23:21:31.390968  103523 command_runner.go:130] >       ],
	I1107 23:21:31.390978  103523 command_runner.go:130] >       "size": "127165392",
	I1107 23:21:31.390991  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.391001  103523 command_runner.go:130] >         "value": "0"
	I1107 23:21:31.391010  103523 command_runner.go:130] >       },
	I1107 23:21:31.391017  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.391022  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.391032  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.391042  103523 command_runner.go:130] >     },
	I1107 23:21:31.391048  103523 command_runner.go:130] >     {
	I1107 23:21:31.391063  103523 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1107 23:21:31.391073  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.391082  103523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1107 23:21:31.391090  103523 command_runner.go:130] >       ],
	I1107 23:21:31.391096  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.391111  103523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1107 23:21:31.391125  103523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1107 23:21:31.391134  103523 command_runner.go:130] >       ],
	I1107 23:21:31.391141  103523 command_runner.go:130] >       "size": "123188534",
	I1107 23:21:31.391150  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.391166  103523 command_runner.go:130] >         "value": "0"
	I1107 23:21:31.391174  103523 command_runner.go:130] >       },
	I1107 23:21:31.391184  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.391193  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.391202  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.391207  103523 command_runner.go:130] >     },
	I1107 23:21:31.391215  103523 command_runner.go:130] >     {
	I1107 23:21:31.391225  103523 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1107 23:21:31.391236  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.391247  103523 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1107 23:21:31.391256  103523 command_runner.go:130] >       ],
	I1107 23:21:31.391266  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.391279  103523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1107 23:21:31.391293  103523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1107 23:21:31.391302  103523 command_runner.go:130] >       ],
	I1107 23:21:31.391309  103523 command_runner.go:130] >       "size": "74691991",
	I1107 23:21:31.391319  103523 command_runner.go:130] >       "uid": null,
	I1107 23:21:31.391325  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.391338  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.391349  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.391355  103523 command_runner.go:130] >     },
	I1107 23:21:31.391365  103523 command_runner.go:130] >     {
	I1107 23:21:31.391376  103523 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1107 23:21:31.391387  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.391398  103523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1107 23:21:31.391408  103523 command_runner.go:130] >       ],
	I1107 23:21:31.391418  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.391511  103523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1107 23:21:31.391534  103523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1107 23:21:31.391540  103523 command_runner.go:130] >       ],
	I1107 23:21:31.391547  103523 command_runner.go:130] >       "size": "61498678",
	I1107 23:21:31.391556  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.391564  103523 command_runner.go:130] >         "value": "0"
	I1107 23:21:31.391572  103523 command_runner.go:130] >       },
	I1107 23:21:31.391579  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.391588  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.391599  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.391607  103523 command_runner.go:130] >     },
	I1107 23:21:31.391614  103523 command_runner.go:130] >     {
	I1107 23:21:31.391626  103523 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1107 23:21:31.391636  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.391643  103523 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1107 23:21:31.391652  103523 command_runner.go:130] >       ],
	I1107 23:21:31.391661  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.391674  103523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1107 23:21:31.391688  103523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1107 23:21:31.391697  103523 command_runner.go:130] >       ],
	I1107 23:21:31.391704  103523 command_runner.go:130] >       "size": "750414",
	I1107 23:21:31.391713  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.391722  103523 command_runner.go:130] >         "value": "65535"
	I1107 23:21:31.391728  103523 command_runner.go:130] >       },
	I1107 23:21:31.391738  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.391747  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.391756  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.391777  103523 command_runner.go:130] >     }
	I1107 23:21:31.391792  103523 command_runner.go:130] >   ]
	I1107 23:21:31.391801  103523 command_runner.go:130] > }
	I1107 23:21:31.392332  103523 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:21:31.392349  103523 crio.go:415] Images already preloaded, skipping extraction
	I1107 23:21:31.392387  103523 ssh_runner.go:195] Run: sudo crictl images --output json
	I1107 23:21:31.423625  103523 command_runner.go:130] > {
	I1107 23:21:31.423650  103523 command_runner.go:130] >   "images": [
	I1107 23:21:31.423657  103523 command_runner.go:130] >     {
	I1107 23:21:31.423665  103523 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1107 23:21:31.423670  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.423676  103523 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1107 23:21:31.423680  103523 command_runner.go:130] >       ],
	I1107 23:21:31.423684  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.423692  103523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1107 23:21:31.423702  103523 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1107 23:21:31.423708  103523 command_runner.go:130] >       ],
	I1107 23:21:31.423713  103523 command_runner.go:130] >       "size": "65258016",
	I1107 23:21:31.423719  103523 command_runner.go:130] >       "uid": null,
	I1107 23:21:31.423728  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.423735  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.423743  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.423746  103523 command_runner.go:130] >     },
	I1107 23:21:31.423752  103523 command_runner.go:130] >     {
	I1107 23:21:31.423775  103523 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1107 23:21:31.423787  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.423795  103523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1107 23:21:31.423801  103523 command_runner.go:130] >       ],
	I1107 23:21:31.423805  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.423812  103523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1107 23:21:31.423824  103523 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1107 23:21:31.423828  103523 command_runner.go:130] >       ],
	I1107 23:21:31.423835  103523 command_runner.go:130] >       "size": "31470524",
	I1107 23:21:31.423839  103523 command_runner.go:130] >       "uid": null,
	I1107 23:21:31.423843  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.423847  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.423851  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.423857  103523 command_runner.go:130] >     },
	I1107 23:21:31.423860  103523 command_runner.go:130] >     {
	I1107 23:21:31.423867  103523 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1107 23:21:31.423874  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.423882  103523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1107 23:21:31.423897  103523 command_runner.go:130] >       ],
	I1107 23:21:31.423901  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.423907  103523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1107 23:21:31.423917  103523 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1107 23:21:31.423921  103523 command_runner.go:130] >       ],
	I1107 23:21:31.423928  103523 command_runner.go:130] >       "size": "53621675",
	I1107 23:21:31.423932  103523 command_runner.go:130] >       "uid": null,
	I1107 23:21:31.423939  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.423943  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.423949  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.423953  103523 command_runner.go:130] >     },
	I1107 23:21:31.423959  103523 command_runner.go:130] >     {
	I1107 23:21:31.423965  103523 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1107 23:21:31.423974  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.423982  103523 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1107 23:21:31.423992  103523 command_runner.go:130] >       ],
	I1107 23:21:31.423999  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.424005  103523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1107 23:21:31.424015  103523 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1107 23:21:31.424027  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424034  103523 command_runner.go:130] >       "size": "295456551",
	I1107 23:21:31.424038  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.424043  103523 command_runner.go:130] >         "value": "0"
	I1107 23:21:31.424047  103523 command_runner.go:130] >       },
	I1107 23:21:31.424053  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.424058  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.424064  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.424069  103523 command_runner.go:130] >     },
	I1107 23:21:31.424074  103523 command_runner.go:130] >     {
	I1107 23:21:31.424080  103523 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1107 23:21:31.424087  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.424095  103523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1107 23:21:31.424102  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424106  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.424113  103523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1107 23:21:31.424123  103523 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1107 23:21:31.424128  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424133  103523 command_runner.go:130] >       "size": "127165392",
	I1107 23:21:31.424139  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.424143  103523 command_runner.go:130] >         "value": "0"
	I1107 23:21:31.424155  103523 command_runner.go:130] >       },
	I1107 23:21:31.424160  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.424166  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.424170  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.424176  103523 command_runner.go:130] >     },
	I1107 23:21:31.424180  103523 command_runner.go:130] >     {
	I1107 23:21:31.424188  103523 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1107 23:21:31.424195  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.424201  103523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1107 23:21:31.424209  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424216  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.424224  103523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1107 23:21:31.424234  103523 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1107 23:21:31.424240  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424244  103523 command_runner.go:130] >       "size": "123188534",
	I1107 23:21:31.424250  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.424255  103523 command_runner.go:130] >         "value": "0"
	I1107 23:21:31.424261  103523 command_runner.go:130] >       },
	I1107 23:21:31.424265  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.424272  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.424276  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.424282  103523 command_runner.go:130] >     },
	I1107 23:21:31.424286  103523 command_runner.go:130] >     {
	I1107 23:21:31.424292  103523 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1107 23:21:31.424298  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.424304  103523 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1107 23:21:31.424309  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424315  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.424325  103523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1107 23:21:31.424334  103523 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1107 23:21:31.424348  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424355  103523 command_runner.go:130] >       "size": "74691991",
	I1107 23:21:31.424359  103523 command_runner.go:130] >       "uid": null,
	I1107 23:21:31.424363  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.424367  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.424371  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.424374  103523 command_runner.go:130] >     },
	I1107 23:21:31.424380  103523 command_runner.go:130] >     {
	I1107 23:21:31.424387  103523 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1107 23:21:31.424393  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.424398  103523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1107 23:21:31.424404  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424408  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.424430  103523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1107 23:21:31.424441  103523 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1107 23:21:31.424451  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424455  103523 command_runner.go:130] >       "size": "61498678",
	I1107 23:21:31.424459  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.424463  103523 command_runner.go:130] >         "value": "0"
	I1107 23:21:31.424469  103523 command_runner.go:130] >       },
	I1107 23:21:31.424473  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.424480  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.424484  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.424490  103523 command_runner.go:130] >     },
	I1107 23:21:31.424494  103523 command_runner.go:130] >     {
	I1107 23:21:31.424502  103523 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1107 23:21:31.424510  103523 command_runner.go:130] >       "repoTags": [
	I1107 23:21:31.424515  103523 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1107 23:21:31.424521  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424528  103523 command_runner.go:130] >       "repoDigests": [
	I1107 23:21:31.424538  103523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1107 23:21:31.424547  103523 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1107 23:21:31.424551  103523 command_runner.go:130] >       ],
	I1107 23:21:31.424557  103523 command_runner.go:130] >       "size": "750414",
	I1107 23:21:31.424564  103523 command_runner.go:130] >       "uid": {
	I1107 23:21:31.424568  103523 command_runner.go:130] >         "value": "65535"
	I1107 23:21:31.424574  103523 command_runner.go:130] >       },
	I1107 23:21:31.424579  103523 command_runner.go:130] >       "username": "",
	I1107 23:21:31.424585  103523 command_runner.go:130] >       "spec": null,
	I1107 23:21:31.424589  103523 command_runner.go:130] >       "pinned": false
	I1107 23:21:31.424595  103523 command_runner.go:130] >     }
	I1107 23:21:31.424599  103523 command_runner.go:130] >   ]
	I1107 23:21:31.424604  103523 command_runner.go:130] > }
	I1107 23:21:31.424708  103523 crio.go:496] all images are preloaded for cri-o runtime.
	I1107 23:21:31.424718  103523 cache_images.go:84] Images are preloaded, skipping loading
	I1107 23:21:31.424779  103523 ssh_runner.go:195] Run: crio config
	I1107 23:21:31.459677  103523 command_runner.go:130] ! time="2023-11-07 23:21:31.459295336Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1107 23:21:31.459709  103523 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1107 23:21:31.464270  103523 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1107 23:21:31.464294  103523 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1107 23:21:31.464301  103523 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1107 23:21:31.464305  103523 command_runner.go:130] > #
	I1107 23:21:31.464332  103523 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1107 23:21:31.464343  103523 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1107 23:21:31.464360  103523 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1107 23:21:31.464377  103523 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1107 23:21:31.464386  103523 command_runner.go:130] > # reload'.
	I1107 23:21:31.464398  103523 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1107 23:21:31.464411  103523 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1107 23:21:31.464424  103523 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1107 23:21:31.464432  103523 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1107 23:21:31.464437  103523 command_runner.go:130] > [crio]
	I1107 23:21:31.464443  103523 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1107 23:21:31.464450  103523 command_runner.go:130] > # containers images, in this directory.
	I1107 23:21:31.464461  103523 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1107 23:21:31.464470  103523 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1107 23:21:31.464479  103523 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1107 23:21:31.464492  103523 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1107 23:21:31.464506  103523 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1107 23:21:31.464517  103523 command_runner.go:130] > # storage_driver = "vfs"
	I1107 23:21:31.464525  103523 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1107 23:21:31.464533  103523 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1107 23:21:31.464542  103523 command_runner.go:130] > # storage_option = [
	I1107 23:21:31.464548  103523 command_runner.go:130] > # ]
	I1107 23:21:31.464554  103523 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1107 23:21:31.464563  103523 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1107 23:21:31.464570  103523 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1107 23:21:31.464576  103523 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1107 23:21:31.464584  103523 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1107 23:21:31.464589  103523 command_runner.go:130] > # always happen on a node reboot
	I1107 23:21:31.464597  103523 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1107 23:21:31.464602  103523 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1107 23:21:31.464610  103523 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1107 23:21:31.464624  103523 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1107 23:21:31.464632  103523 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1107 23:21:31.464640  103523 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1107 23:21:31.464655  103523 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1107 23:21:31.464665  103523 command_runner.go:130] > # internal_wipe = true
	I1107 23:21:31.464674  103523 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1107 23:21:31.464683  103523 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1107 23:21:31.464699  103523 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1107 23:21:31.464706  103523 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1107 23:21:31.464714  103523 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1107 23:21:31.464718  103523 command_runner.go:130] > [crio.api]
	I1107 23:21:31.464726  103523 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1107 23:21:31.464731  103523 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1107 23:21:31.464739  103523 command_runner.go:130] > # IP address on which the stream server will listen.
	I1107 23:21:31.464743  103523 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1107 23:21:31.464752  103523 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1107 23:21:31.464757  103523 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1107 23:21:31.464764  103523 command_runner.go:130] > # stream_port = "0"
	I1107 23:21:31.464769  103523 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1107 23:21:31.464776  103523 command_runner.go:130] > # stream_enable_tls = false
	I1107 23:21:31.464782  103523 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1107 23:21:31.464788  103523 command_runner.go:130] > # stream_idle_timeout = ""
	I1107 23:21:31.464794  103523 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1107 23:21:31.464802  103523 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1107 23:21:31.464808  103523 command_runner.go:130] > # minutes.
	I1107 23:21:31.464815  103523 command_runner.go:130] > # stream_tls_cert = ""
	I1107 23:21:31.464824  103523 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1107 23:21:31.464832  103523 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1107 23:21:31.464838  103523 command_runner.go:130] > # stream_tls_key = ""
	I1107 23:21:31.464844  103523 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1107 23:21:31.464852  103523 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1107 23:21:31.464860  103523 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1107 23:21:31.464866  103523 command_runner.go:130] > # stream_tls_ca = ""
	I1107 23:21:31.464875  103523 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:21:31.464882  103523 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1107 23:21:31.464889  103523 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:21:31.464896  103523 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1107 23:21:31.464922  103523 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1107 23:21:31.464931  103523 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1107 23:21:31.464935  103523 command_runner.go:130] > [crio.runtime]
	I1107 23:21:31.464941  103523 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1107 23:21:31.464946  103523 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1107 23:21:31.464953  103523 command_runner.go:130] > # "nofile=1024:2048"
	I1107 23:21:31.464962  103523 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1107 23:21:31.464968  103523 command_runner.go:130] > # default_ulimits = [
	I1107 23:21:31.464972  103523 command_runner.go:130] > # ]
	I1107 23:21:31.464982  103523 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1107 23:21:31.464989  103523 command_runner.go:130] > # no_pivot = false
	I1107 23:21:31.464995  103523 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1107 23:21:31.465003  103523 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1107 23:21:31.465010  103523 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1107 23:21:31.465016  103523 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1107 23:21:31.465023  103523 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1107 23:21:31.465029  103523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:21:31.465036  103523 command_runner.go:130] > # conmon = ""
	I1107 23:21:31.465040  103523 command_runner.go:130] > # Cgroup setting for conmon
	I1107 23:21:31.465048  103523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1107 23:21:31.465055  103523 command_runner.go:130] > conmon_cgroup = "pod"
	I1107 23:21:31.465061  103523 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1107 23:21:31.465068  103523 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1107 23:21:31.465075  103523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:21:31.465083  103523 command_runner.go:130] > # conmon_env = [
	I1107 23:21:31.465088  103523 command_runner.go:130] > # ]
	I1107 23:21:31.465093  103523 command_runner.go:130] > # Additional environment variables to set for all the
	I1107 23:21:31.465101  103523 command_runner.go:130] > # containers. These are overridden if set in the
	I1107 23:21:31.465109  103523 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1107 23:21:31.465113  103523 command_runner.go:130] > # default_env = [
	I1107 23:21:31.465119  103523 command_runner.go:130] > # ]
	I1107 23:21:31.465125  103523 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1107 23:21:31.465131  103523 command_runner.go:130] > # selinux = false
	I1107 23:21:31.465137  103523 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1107 23:21:31.465146  103523 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1107 23:21:31.465155  103523 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1107 23:21:31.465161  103523 command_runner.go:130] > # seccomp_profile = ""
	I1107 23:21:31.465167  103523 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1107 23:21:31.465175  103523 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1107 23:21:31.465183  103523 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1107 23:21:31.465190  103523 command_runner.go:130] > # which might increase security.
	I1107 23:21:31.465195  103523 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1107 23:21:31.465206  103523 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1107 23:21:31.465214  103523 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1107 23:21:31.465222  103523 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1107 23:21:31.465231  103523 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1107 23:21:31.465236  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:21:31.465243  103523 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1107 23:21:31.465249  103523 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1107 23:21:31.465256  103523 command_runner.go:130] > # the cgroup blockio controller.
	I1107 23:21:31.465261  103523 command_runner.go:130] > # blockio_config_file = ""
	I1107 23:21:31.465272  103523 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1107 23:21:31.465276  103523 command_runner.go:130] > # irqbalance daemon.
	I1107 23:21:31.465282  103523 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1107 23:21:31.465291  103523 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1107 23:21:31.465298  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:21:31.465304  103523 command_runner.go:130] > # rdt_config_file = ""
	I1107 23:21:31.465310  103523 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1107 23:21:31.465316  103523 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1107 23:21:31.465322  103523 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1107 23:21:31.465334  103523 command_runner.go:130] > # separate_pull_cgroup = ""
	I1107 23:21:31.465343  103523 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1107 23:21:31.465349  103523 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1107 23:21:31.465355  103523 command_runner.go:130] > # will be added.
	I1107 23:21:31.465361  103523 command_runner.go:130] > # default_capabilities = [
	I1107 23:21:31.465369  103523 command_runner.go:130] > # 	"CHOWN",
	I1107 23:21:31.465373  103523 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1107 23:21:31.465379  103523 command_runner.go:130] > # 	"FSETID",
	I1107 23:21:31.465385  103523 command_runner.go:130] > # 	"FOWNER",
	I1107 23:21:31.465393  103523 command_runner.go:130] > # 	"SETGID",
	I1107 23:21:31.465397  103523 command_runner.go:130] > # 	"SETUID",
	I1107 23:21:31.465403  103523 command_runner.go:130] > # 	"SETPCAP",
	I1107 23:21:31.465408  103523 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1107 23:21:31.465413  103523 command_runner.go:130] > # 	"KILL",
	I1107 23:21:31.465417  103523 command_runner.go:130] > # ]
	I1107 23:21:31.465429  103523 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1107 23:21:31.465438  103523 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1107 23:21:31.465446  103523 command_runner.go:130] > # add_inheritable_capabilities = true
	I1107 23:21:31.465455  103523 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1107 23:21:31.465463  103523 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:21:31.465470  103523 command_runner.go:130] > # default_sysctls = [
	I1107 23:21:31.465474  103523 command_runner.go:130] > # ]
	I1107 23:21:31.465481  103523 command_runner.go:130] > # List of devices on the host that a
	I1107 23:21:31.465487  103523 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1107 23:21:31.465494  103523 command_runner.go:130] > # allowed_devices = [
	I1107 23:21:31.465498  103523 command_runner.go:130] > # 	"/dev/fuse",
	I1107 23:21:31.465504  103523 command_runner.go:130] > # ]
	I1107 23:21:31.465509  103523 command_runner.go:130] > # List of additional devices. specified as
	I1107 23:21:31.465546  103523 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1107 23:21:31.465555  103523 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1107 23:21:31.465561  103523 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:21:31.465565  103523 command_runner.go:130] > # additional_devices = [
	I1107 23:21:31.465571  103523 command_runner.go:130] > # ]
	I1107 23:21:31.465577  103523 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1107 23:21:31.465585  103523 command_runner.go:130] > # cdi_spec_dirs = [
	I1107 23:21:31.465591  103523 command_runner.go:130] > # 	"/etc/cdi",
	I1107 23:21:31.465598  103523 command_runner.go:130] > # 	"/var/run/cdi",
	I1107 23:21:31.465603  103523 command_runner.go:130] > # ]
	I1107 23:21:31.465609  103523 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1107 23:21:31.465616  103523 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1107 23:21:31.465622  103523 command_runner.go:130] > # Defaults to false.
	I1107 23:21:31.465627  103523 command_runner.go:130] > # device_ownership_from_security_context = false
	I1107 23:21:31.465636  103523 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1107 23:21:31.465642  103523 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1107 23:21:31.465670  103523 command_runner.go:130] > # hooks_dir = [
	I1107 23:21:31.465679  103523 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1107 23:21:31.465686  103523 command_runner.go:130] > # ]
	I1107 23:21:31.465692  103523 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1107 23:21:31.465700  103523 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1107 23:21:31.465707  103523 command_runner.go:130] > # its default mounts from the following two files:
	I1107 23:21:31.465713  103523 command_runner.go:130] > #
	I1107 23:21:31.465720  103523 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1107 23:21:31.465728  103523 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1107 23:21:31.465736  103523 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1107 23:21:31.465743  103523 command_runner.go:130] > #
	I1107 23:21:31.465752  103523 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1107 23:21:31.465760  103523 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1107 23:21:31.465768  103523 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1107 23:21:31.465774  103523 command_runner.go:130] > #      only add mounts it finds in this file.
	I1107 23:21:31.465779  103523 command_runner.go:130] > #
	I1107 23:21:31.465784  103523 command_runner.go:130] > # default_mounts_file = ""
	I1107 23:21:31.465789  103523 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1107 23:21:31.465797  103523 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1107 23:21:31.465804  103523 command_runner.go:130] > # pids_limit = 0
	I1107 23:21:31.465810  103523 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1107 23:21:31.465818  103523 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1107 23:21:31.465826  103523 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1107 23:21:31.465836  103523 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1107 23:21:31.465846  103523 command_runner.go:130] > # log_size_max = -1
	I1107 23:21:31.465855  103523 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1107 23:21:31.465860  103523 command_runner.go:130] > # log_to_journald = false
	I1107 23:21:31.465868  103523 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1107 23:21:31.465877  103523 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1107 23:21:31.465885  103523 command_runner.go:130] > # Path to directory for container attach sockets.
	I1107 23:21:31.465891  103523 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1107 23:21:31.465899  103523 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1107 23:21:31.465905  103523 command_runner.go:130] > # bind_mount_prefix = ""
	I1107 23:21:31.465911  103523 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1107 23:21:31.465917  103523 command_runner.go:130] > # read_only = false
	I1107 23:21:31.465923  103523 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1107 23:21:31.465932  103523 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1107 23:21:31.465938  103523 command_runner.go:130] > # live configuration reload.
	I1107 23:21:31.465942  103523 command_runner.go:130] > # log_level = "info"
	I1107 23:21:31.465948  103523 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1107 23:21:31.465955  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:21:31.465959  103523 command_runner.go:130] > # log_filter = ""
	I1107 23:21:31.465967  103523 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1107 23:21:31.465975  103523 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1107 23:21:31.465981  103523 command_runner.go:130] > # separated by comma.
	I1107 23:21:31.465986  103523 command_runner.go:130] > # uid_mappings = ""
	I1107 23:21:31.465996  103523 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1107 23:21:31.466004  103523 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1107 23:21:31.466011  103523 command_runner.go:130] > # separated by comma.
	I1107 23:21:31.466015  103523 command_runner.go:130] > # gid_mappings = ""
	I1107 23:21:31.466021  103523 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1107 23:21:31.466030  103523 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:21:31.466036  103523 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:21:31.466042  103523 command_runner.go:130] > # minimum_mappable_uid = -1
	I1107 23:21:31.466049  103523 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1107 23:21:31.466057  103523 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:21:31.466065  103523 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:21:31.466070  103523 command_runner.go:130] > # minimum_mappable_gid = -1
	I1107 23:21:31.466079  103523 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1107 23:21:31.466087  103523 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1107 23:21:31.466095  103523 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1107 23:21:31.466101  103523 command_runner.go:130] > # ctr_stop_timeout = 30
	I1107 23:21:31.466107  103523 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1107 23:21:31.466122  103523 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1107 23:21:31.466131  103523 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1107 23:21:31.466136  103523 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1107 23:21:31.466143  103523 command_runner.go:130] > # drop_infra_ctr = true
	I1107 23:21:31.466151  103523 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1107 23:21:31.466159  103523 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1107 23:21:31.466166  103523 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1107 23:21:31.466173  103523 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1107 23:21:31.466179  103523 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1107 23:21:31.466186  103523 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1107 23:21:31.466191  103523 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1107 23:21:31.466200  103523 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1107 23:21:31.466206  103523 command_runner.go:130] > # pinns_path = ""
	I1107 23:21:31.466213  103523 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1107 23:21:31.466221  103523 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1107 23:21:31.466227  103523 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1107 23:21:31.466233  103523 command_runner.go:130] > # default_runtime = "runc"
	I1107 23:21:31.466238  103523 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1107 23:21:31.466248  103523 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1107 23:21:31.466261  103523 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1107 23:21:31.466268  103523 command_runner.go:130] > # creation as a file is not desired either.
	I1107 23:21:31.466278  103523 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1107 23:21:31.466286  103523 command_runner.go:130] > # the hostname is being managed dynamically.
	I1107 23:21:31.466290  103523 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1107 23:21:31.466296  103523 command_runner.go:130] > # ]
	I1107 23:21:31.466303  103523 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1107 23:21:31.466311  103523 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1107 23:21:31.466318  103523 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1107 23:21:31.466326  103523 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1107 23:21:31.466331  103523 command_runner.go:130] > #
	I1107 23:21:31.466336  103523 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1107 23:21:31.466343  103523 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1107 23:21:31.466348  103523 command_runner.go:130] > #  runtime_type = "oci"
	I1107 23:21:31.466355  103523 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1107 23:21:31.466360  103523 command_runner.go:130] > #  privileged_without_host_devices = false
	I1107 23:21:31.466366  103523 command_runner.go:130] > #  allowed_annotations = []
	I1107 23:21:31.466370  103523 command_runner.go:130] > # Where:
	I1107 23:21:31.466380  103523 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1107 23:21:31.466389  103523 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1107 23:21:31.466397  103523 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1107 23:21:31.466403  103523 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1107 23:21:31.466409  103523 command_runner.go:130] > #   in $PATH.
	I1107 23:21:31.466416  103523 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1107 23:21:31.466423  103523 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1107 23:21:31.466429  103523 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1107 23:21:31.466435  103523 command_runner.go:130] > #   state.
	I1107 23:21:31.466441  103523 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1107 23:21:31.466449  103523 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1107 23:21:31.466457  103523 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1107 23:21:31.466465  103523 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1107 23:21:31.466471  103523 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1107 23:21:31.466479  103523 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1107 23:21:31.466486  103523 command_runner.go:130] > #   The currently recognized values are:
	I1107 23:21:31.466492  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1107 23:21:31.466505  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1107 23:21:31.466516  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1107 23:21:31.466524  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1107 23:21:31.466533  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1107 23:21:31.466542  103523 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1107 23:21:31.466550  103523 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1107 23:21:31.466559  103523 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1107 23:21:31.466564  103523 command_runner.go:130] > #   should be moved to the container's cgroup
	I1107 23:21:31.466571  103523 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1107 23:21:31.466576  103523 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1107 23:21:31.466582  103523 command_runner.go:130] > runtime_type = "oci"
	I1107 23:21:31.466587  103523 command_runner.go:130] > runtime_root = "/run/runc"
	I1107 23:21:31.466593  103523 command_runner.go:130] > runtime_config_path = ""
	I1107 23:21:31.466597  103523 command_runner.go:130] > monitor_path = ""
	I1107 23:21:31.466604  103523 command_runner.go:130] > monitor_cgroup = ""
	I1107 23:21:31.466608  103523 command_runner.go:130] > monitor_exec_cgroup = ""
	I1107 23:21:31.466662  103523 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1107 23:21:31.466670  103523 command_runner.go:130] > # running containers
	I1107 23:21:31.466674  103523 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1107 23:21:31.466682  103523 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1107 23:21:31.466691  103523 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1107 23:21:31.466698  103523 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1107 23:21:31.466706  103523 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1107 23:21:31.466711  103523 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1107 23:21:31.466717  103523 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1107 23:21:31.466723  103523 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1107 23:21:31.466731  103523 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1107 23:21:31.466738  103523 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1107 23:21:31.466745  103523 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1107 23:21:31.466752  103523 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1107 23:21:31.466759  103523 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1107 23:21:31.466768  103523 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1107 23:21:31.466776  103523 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1107 23:21:31.466784  103523 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1107 23:21:31.466792  103523 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1107 23:21:31.466803  103523 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1107 23:21:31.466811  103523 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1107 23:21:31.466824  103523 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1107 23:21:31.466830  103523 command_runner.go:130] > # Example:
	I1107 23:21:31.466835  103523 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1107 23:21:31.466842  103523 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1107 23:21:31.466849  103523 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1107 23:21:31.466859  103523 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1107 23:21:31.466865  103523 command_runner.go:130] > # cpuset = "0-1"
	I1107 23:21:31.466874  103523 command_runner.go:130] > # cpushares = 0
	I1107 23:21:31.466883  103523 command_runner.go:130] > # Where:
	I1107 23:21:31.466894  103523 command_runner.go:130] > # The workload name is workload-type.
	I1107 23:21:31.466908  103523 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1107 23:21:31.466919  103523 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1107 23:21:31.466930  103523 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1107 23:21:31.466944  103523 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1107 23:21:31.466956  103523 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1107 23:21:31.466965  103523 command_runner.go:130] > # 
	I1107 23:21:31.466978  103523 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1107 23:21:31.466986  103523 command_runner.go:130] > #
	I1107 23:21:31.467000  103523 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1107 23:21:31.467009  103523 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1107 23:21:31.467015  103523 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1107 23:21:31.467023  103523 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1107 23:21:31.467031  103523 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1107 23:21:31.467038  103523 command_runner.go:130] > [crio.image]
	I1107 23:21:31.467044  103523 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1107 23:21:31.467050  103523 command_runner.go:130] > # default_transport = "docker://"
	I1107 23:21:31.467057  103523 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1107 23:21:31.467065  103523 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:21:31.467071  103523 command_runner.go:130] > # global_auth_file = ""
	I1107 23:21:31.467077  103523 command_runner.go:130] > # The image used to instantiate infra containers.
	I1107 23:21:31.467085  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:21:31.467093  103523 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1107 23:21:31.467099  103523 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1107 23:21:31.467107  103523 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:21:31.467115  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:21:31.467119  103523 command_runner.go:130] > # pause_image_auth_file = ""
	I1107 23:21:31.467148  103523 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1107 23:21:31.467165  103523 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1107 23:21:31.467174  103523 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1107 23:21:31.467183  103523 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1107 23:21:31.467187  103523 command_runner.go:130] > # pause_command = "/pause"
	I1107 23:21:31.467196  103523 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1107 23:21:31.467203  103523 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1107 23:21:31.467211  103523 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1107 23:21:31.467219  103523 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1107 23:21:31.467227  103523 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1107 23:21:31.467231  103523 command_runner.go:130] > # signature_policy = ""
	I1107 23:21:31.467245  103523 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1107 23:21:31.467253  103523 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1107 23:21:31.467260  103523 command_runner.go:130] > # changing them here.
	I1107 23:21:31.467264  103523 command_runner.go:130] > # insecure_registries = [
	I1107 23:21:31.467270  103523 command_runner.go:130] > # ]
	I1107 23:21:31.467277  103523 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1107 23:21:31.467284  103523 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1107 23:21:31.467293  103523 command_runner.go:130] > # image_volumes = "mkdir"
	I1107 23:21:31.467300  103523 command_runner.go:130] > # Temporary directory to use for storing big files
	I1107 23:21:31.467314  103523 command_runner.go:130] > # big_files_temporary_dir = ""
	I1107 23:21:31.467320  103523 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1107 23:21:31.467331  103523 command_runner.go:130] > # CNI plugins.
	I1107 23:21:31.467337  103523 command_runner.go:130] > [crio.network]
	I1107 23:21:31.467343  103523 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1107 23:21:31.467350  103523 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1107 23:21:31.467360  103523 command_runner.go:130] > # cni_default_network = ""
	I1107 23:21:31.467369  103523 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1107 23:21:31.467376  103523 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1107 23:21:31.467382  103523 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1107 23:21:31.467394  103523 command_runner.go:130] > # plugin_dirs = [
	I1107 23:21:31.467400  103523 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1107 23:21:31.467403  103523 command_runner.go:130] > # ]
	I1107 23:21:31.467411  103523 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1107 23:21:31.467417  103523 command_runner.go:130] > [crio.metrics]
	I1107 23:21:31.467422  103523 command_runner.go:130] > # Globally enable or disable metrics support.
	I1107 23:21:31.467430  103523 command_runner.go:130] > # enable_metrics = false
	I1107 23:21:31.467438  103523 command_runner.go:130] > # Specify enabled metrics collectors.
	I1107 23:21:31.467443  103523 command_runner.go:130] > # Per default all metrics are enabled.
	I1107 23:21:31.467451  103523 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1107 23:21:31.467462  103523 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1107 23:21:31.467471  103523 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1107 23:21:31.467475  103523 command_runner.go:130] > # metrics_collectors = [
	I1107 23:21:31.467479  103523 command_runner.go:130] > # 	"operations",
	I1107 23:21:31.467486  103523 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1107 23:21:31.467493  103523 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1107 23:21:31.467497  103523 command_runner.go:130] > # 	"operations_errors",
	I1107 23:21:31.467504  103523 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1107 23:21:31.467508  103523 command_runner.go:130] > # 	"image_pulls_by_name",
	I1107 23:21:31.467512  103523 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1107 23:21:31.467519  103523 command_runner.go:130] > # 	"image_pulls_failures",
	I1107 23:21:31.467523  103523 command_runner.go:130] > # 	"image_pulls_successes",
	I1107 23:21:31.467530  103523 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1107 23:21:31.467534  103523 command_runner.go:130] > # 	"image_layer_reuse",
	I1107 23:21:31.467548  103523 command_runner.go:130] > # 	"containers_oom_total",
	I1107 23:21:31.467555  103523 command_runner.go:130] > # 	"containers_oom",
	I1107 23:21:31.467559  103523 command_runner.go:130] > # 	"processes_defunct",
	I1107 23:21:31.467565  103523 command_runner.go:130] > # 	"operations_total",
	I1107 23:21:31.467570  103523 command_runner.go:130] > # 	"operations_latency_seconds",
	I1107 23:21:31.467577  103523 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1107 23:21:31.467581  103523 command_runner.go:130] > # 	"operations_errors_total",
	I1107 23:21:31.467587  103523 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1107 23:21:31.467592  103523 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1107 23:21:31.467599  103523 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1107 23:21:31.467604  103523 command_runner.go:130] > # 	"image_pulls_success_total",
	I1107 23:21:31.467610  103523 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1107 23:21:31.467615  103523 command_runner.go:130] > # 	"containers_oom_count_total",
	I1107 23:21:31.467620  103523 command_runner.go:130] > # ]
	I1107 23:21:31.467625  103523 command_runner.go:130] > # The port on which the metrics server will listen.
	I1107 23:21:31.467631  103523 command_runner.go:130] > # metrics_port = 9090
	I1107 23:21:31.467637  103523 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1107 23:21:31.467643  103523 command_runner.go:130] > # metrics_socket = ""
	I1107 23:21:31.467656  103523 command_runner.go:130] > # The certificate for the secure metrics server.
	I1107 23:21:31.467664  103523 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1107 23:21:31.467672  103523 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1107 23:21:31.467677  103523 command_runner.go:130] > # certificate on any modification event.
	I1107 23:21:31.467683  103523 command_runner.go:130] > # metrics_cert = ""
	I1107 23:21:31.467691  103523 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1107 23:21:31.467699  103523 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1107 23:21:31.467706  103523 command_runner.go:130] > # metrics_key = ""
	I1107 23:21:31.467711  103523 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1107 23:21:31.467718  103523 command_runner.go:130] > [crio.tracing]
	I1107 23:21:31.467724  103523 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1107 23:21:31.467730  103523 command_runner.go:130] > # enable_tracing = false
	I1107 23:21:31.467735  103523 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1107 23:21:31.467742  103523 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1107 23:21:31.467748  103523 command_runner.go:130] > # Number of samples to collect per million spans.
	I1107 23:21:31.467754  103523 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1107 23:21:31.467774  103523 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1107 23:21:31.467783  103523 command_runner.go:130] > [crio.stats]
	I1107 23:21:31.467796  103523 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1107 23:21:31.467806  103523 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1107 23:21:31.467812  103523 command_runner.go:130] > # stats_collection_period = 0
	I1107 23:21:31.467889  103523 cni.go:84] Creating CNI manager for ""
	I1107 23:21:31.467900  103523 cni.go:136] 1 nodes found, recommending kindnet
	I1107 23:21:31.467920  103523 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:21:31.467939  103523 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-542158 NodeName:multinode-542158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:21:31.468060  103523 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-542158"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:21:31.468126  103523 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-542158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-542158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:21:31.468176  103523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:21:31.476515  103523 command_runner.go:130] > kubeadm
	I1107 23:21:31.476543  103523 command_runner.go:130] > kubectl
	I1107 23:21:31.476551  103523 command_runner.go:130] > kubelet
	I1107 23:21:31.476573  103523 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:21:31.476622  103523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 23:21:31.484253  103523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1107 23:21:31.500002  103523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:21:31.516177  103523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1107 23:21:31.532096  103523 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:21:31.535237  103523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:21:31.544916  103523 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158 for IP: 192.168.58.2
	I1107 23:21:31.544959  103523 certs.go:190] acquiring lock for shared ca certs: {Name:mkbe2c97e30f744ec2581d086567acaa8822f7ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:31.545084  103523 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key
	I1107 23:21:31.545150  103523 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key
	I1107 23:21:31.545211  103523 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.key
	I1107 23:21:31.545227  103523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.crt with IP's: []
	I1107 23:21:31.694393  103523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.crt ...
	I1107 23:21:31.694426  103523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.crt: {Name:mk92349dc63ba8e3e6d3edfe149a395592ff1910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:31.694629  103523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.key ...
	I1107 23:21:31.694654  103523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.key: {Name:mk09f359c088e5fb5045d08dd9aa9fafc5e9921d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:31.694761  103523 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.key.cee25041
	I1107 23:21:31.694783  103523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 23:21:31.843374  103523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.crt.cee25041 ...
	I1107 23:21:31.843405  103523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.crt.cee25041: {Name:mk5dff97609dd98956aeceaa8f895d5646785a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:31.843576  103523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.key.cee25041 ...
	I1107 23:21:31.843593  103523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.key.cee25041: {Name:mk3138a4707834256192652d76b98ba82b5a41f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:31.843694  103523 certs.go:337] copying /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.crt
	I1107 23:21:31.843810  103523 certs.go:341] copying /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.key
	I1107 23:21:31.843892  103523 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.key
	I1107 23:21:31.843917  103523 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.crt with IP's: []
	I1107 23:21:31.983175  103523 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.crt ...
	I1107 23:21:31.983211  103523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.crt: {Name:mk8096e12467a23768733bd3246cd6022d31f230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:31.983394  103523 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.key ...
	I1107 23:21:31.983416  103523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.key: {Name:mk0518bff7e241bc62745468063b565442aaeb87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:31.983509  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 23:21:31.983548  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 23:21:31.983567  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 23:21:31.983585  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 23:21:31.983598  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:21:31.983614  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:21:31.983633  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:21:31.983659  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:21:31.983717  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211.pem (1338 bytes)
	W1107 23:21:31.983786  103523 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211_empty.pem, impossibly tiny 0 bytes
	I1107 23:21:31.983805  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 23:21:31.983845  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:21:31.983876  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:21:31.983912  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem (1675 bytes)
	I1107 23:21:31.984005  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem (1708 bytes)
	I1107 23:21:31.984047  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:21:31.984068  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211.pem -> /usr/share/ca-certificates/16211.pem
	I1107 23:21:31.984089  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> /usr/share/ca-certificates/162112.pem
	I1107 23:21:31.986322  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 23:21:32.009703  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1107 23:21:32.032295  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 23:21:32.055339  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1107 23:21:32.079056  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:21:32.101314  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:21:32.123444  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:21:32.145599  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:21:32.167266  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:21:32.189467  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211.pem --> /usr/share/ca-certificates/16211.pem (1338 bytes)
	I1107 23:21:32.211649  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem --> /usr/share/ca-certificates/162112.pem (1708 bytes)
	I1107 23:21:32.234052  103523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 23:21:32.251019  103523 ssh_runner.go:195] Run: openssl version
	I1107 23:21:32.256114  103523 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1107 23:21:32.256236  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:21:32.264854  103523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:21:32.268193  103523 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:21:32.268212  103523 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:21:32.268247  103523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:21:32.274736  103523 command_runner.go:130] > b5213941
	I1107 23:21:32.274821  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:21:32.283232  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16211.pem && ln -fs /usr/share/ca-certificates/16211.pem /etc/ssl/certs/16211.pem"
	I1107 23:21:32.291640  103523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16211.pem
	I1107 23:21:32.294700  103523 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 23:08 /usr/share/ca-certificates/16211.pem
	I1107 23:21:32.294753  103523 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:08 /usr/share/ca-certificates/16211.pem
	I1107 23:21:32.294806  103523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16211.pem
	I1107 23:21:32.300804  103523 command_runner.go:130] > 51391683
	I1107 23:21:32.300989  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16211.pem /etc/ssl/certs/51391683.0"
	I1107 23:21:32.309784  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162112.pem && ln -fs /usr/share/ca-certificates/162112.pem /etc/ssl/certs/162112.pem"
	I1107 23:21:32.318216  103523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162112.pem
	I1107 23:21:32.321423  103523 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 23:08 /usr/share/ca-certificates/162112.pem
	I1107 23:21:32.321476  103523 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:08 /usr/share/ca-certificates/162112.pem
	I1107 23:21:32.321522  103523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162112.pem
	I1107 23:21:32.327458  103523 command_runner.go:130] > 3ec20f2e
	I1107 23:21:32.327523  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162112.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:21:32.335758  103523 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:21:32.338780  103523 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:21:32.338820  103523 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:21:32.338860  103523 kubeadm.go:404] StartCluster: {Name:multinode-542158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-542158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:21:32.338932  103523 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1107 23:21:32.338969  103523 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1107 23:21:32.370827  103523 cri.go:89] found id: ""
	I1107 23:21:32.370900  103523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 23:21:32.379136  103523 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1107 23:21:32.379181  103523 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1107 23:21:32.379193  103523 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1107 23:21:32.379258  103523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 23:21:32.387077  103523 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 23:21:32.387128  103523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 23:21:32.395064  103523 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1107 23:21:32.395090  103523 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1107 23:21:32.395101  103523 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1107 23:21:32.395112  103523 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:21:32.395147  103523 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 23:21:32.395184  103523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 23:21:32.439690  103523 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1107 23:21:32.439719  103523 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1107 23:21:32.439887  103523 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 23:21:32.439911  103523 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 23:21:32.474924  103523 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:21:32.474959  103523 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:21:32.475044  103523 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1046-gcp
	I1107 23:21:32.475057  103523 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1046-gcp
	I1107 23:21:32.475103  103523 kubeadm.go:322] OS: Linux
	I1107 23:21:32.475112  103523 command_runner.go:130] > OS: Linux
	I1107 23:21:32.475157  103523 kubeadm.go:322] CGROUPS_CPU: enabled
	I1107 23:21:32.475172  103523 command_runner.go:130] > CGROUPS_CPU: enabled
	I1107 23:21:32.475229  103523 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1107 23:21:32.475236  103523 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1107 23:21:32.475278  103523 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1107 23:21:32.475288  103523 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1107 23:21:32.475326  103523 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1107 23:21:32.475332  103523 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1107 23:21:32.475369  103523 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1107 23:21:32.475375  103523 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1107 23:21:32.475429  103523 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1107 23:21:32.475460  103523 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1107 23:21:32.475529  103523 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1107 23:21:32.475539  103523 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1107 23:21:32.475576  103523 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1107 23:21:32.475583  103523 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1107 23:21:32.475659  103523 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1107 23:21:32.475681  103523 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1107 23:21:32.539939  103523 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:21:32.539962  103523 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 23:21:32.540117  103523 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:21:32.540130  103523 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 23:21:32.540255  103523 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:21:32.540272  103523 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 23:21:32.735906  103523 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:21:32.738201  103523 out.go:204]   - Generating certificates and keys ...
	I1107 23:21:32.735947  103523 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 23:21:32.738327  103523 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1107 23:21:32.738339  103523 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 23:21:32.738441  103523 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1107 23:21:32.738451  103523 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 23:21:33.003089  103523 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:21:33.003117  103523 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 23:21:33.242814  103523 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:21:33.242845  103523 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1107 23:21:33.441529  103523 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 23:21:33.441557  103523 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1107 23:21:33.633157  103523 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 23:21:33.633188  103523 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1107 23:21:33.758951  103523 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 23:21:33.758981  103523 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1107 23:21:33.759132  103523 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-542158] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1107 23:21:33.759161  103523 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-542158] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1107 23:21:33.989472  103523 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 23:21:33.989506  103523 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1107 23:21:33.989666  103523 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-542158] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1107 23:21:33.989685  103523 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-542158] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1107 23:21:34.065108  103523 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:21:34.065127  103523 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 23:21:34.269436  103523 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:21:34.269464  103523 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 23:21:34.654339  103523 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 23:21:34.654395  103523 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1107 23:21:34.654488  103523 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:21:34.654505  103523 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 23:21:34.846527  103523 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:21:34.846561  103523 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 23:21:34.983093  103523 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:21:34.983129  103523 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 23:21:35.117409  103523 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:21:35.117439  103523 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 23:21:35.513221  103523 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:21:35.513262  103523 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 23:21:35.513675  103523 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:21:35.513697  103523 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 23:21:35.516865  103523 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:21:35.519314  103523 out.go:204]   - Booting up control plane ...
	I1107 23:21:35.516909  103523 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 23:21:35.519414  103523 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:21:35.519428  103523 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 23:21:35.519546  103523 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:21:35.519557  103523 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 23:21:35.519636  103523 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:21:35.519663  103523 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 23:21:35.527533  103523 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:21:35.527561  103523 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:21:35.528334  103523 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:21:35.528350  103523 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:21:35.528381  103523 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 23:21:35.528388  103523 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 23:21:35.606411  103523 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:21:35.606431  103523 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 23:21:40.608491  103523 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002229 seconds
	I1107 23:21:40.608519  103523 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002229 seconds
	I1107 23:21:40.608668  103523 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:21:40.608683  103523 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1107 23:21:40.623405  103523 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:21:40.623429  103523 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1107 23:21:41.144721  103523 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:21:41.144762  103523 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1107 23:21:41.144947  103523 kubeadm.go:322] [mark-control-plane] Marking the node multinode-542158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:21:41.144976  103523 command_runner.go:130] > [mark-control-plane] Marking the node multinode-542158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1107 23:21:41.655234  103523 kubeadm.go:322] [bootstrap-token] Using token: czd864.1kl2q5xnkfmanv1k
	I1107 23:21:41.656886  103523 out.go:204]   - Configuring RBAC rules ...
	I1107 23:21:41.655301  103523 command_runner.go:130] > [bootstrap-token] Using token: czd864.1kl2q5xnkfmanv1k
	I1107 23:21:41.657042  103523 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:21:41.657058  103523 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1107 23:21:41.661591  103523 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:21:41.661616  103523 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1107 23:21:41.670538  103523 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:21:41.670567  103523 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1107 23:21:41.673808  103523 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:21:41.673835  103523 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1107 23:21:41.676970  103523 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:21:41.676996  103523 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1107 23:21:41.680426  103523 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:21:41.680449  103523 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1107 23:21:41.692491  103523 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:21:41.692531  103523 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1107 23:21:41.925737  103523 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1107 23:21:41.925766  103523 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1107 23:21:42.084997  103523 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1107 23:21:42.085025  103523 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1107 23:21:42.086288  103523 kubeadm.go:322] 
	I1107 23:21:42.086388  103523 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1107 23:21:42.086404  103523 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1107 23:21:42.086412  103523 kubeadm.go:322] 
	I1107 23:21:42.086511  103523 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1107 23:21:42.086525  103523 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1107 23:21:42.086532  103523 kubeadm.go:322] 
	I1107 23:21:42.086583  103523 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1107 23:21:42.086604  103523 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1107 23:21:42.086704  103523 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:21:42.086724  103523 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1107 23:21:42.086796  103523 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:21:42.086805  103523 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1107 23:21:42.086810  103523 kubeadm.go:322] 
	I1107 23:21:42.086876  103523 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1107 23:21:42.086886  103523 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1107 23:21:42.086891  103523 kubeadm.go:322] 
	I1107 23:21:42.086955  103523 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:21:42.086965  103523 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1107 23:21:42.086970  103523 kubeadm.go:322] 
	I1107 23:21:42.087051  103523 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1107 23:21:42.087060  103523 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1107 23:21:42.087159  103523 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:21:42.087176  103523 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1107 23:21:42.087285  103523 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:21:42.087299  103523 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1107 23:21:42.087305  103523 kubeadm.go:322] 
	I1107 23:21:42.087424  103523 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:21:42.087435  103523 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1107 23:21:42.087549  103523 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1107 23:21:42.087560  103523 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1107 23:21:42.087566  103523 kubeadm.go:322] 
	I1107 23:21:42.087695  103523 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token czd864.1kl2q5xnkfmanv1k \
	I1107 23:21:42.087709  103523 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token czd864.1kl2q5xnkfmanv1k \
	I1107 23:21:42.087862  103523 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 \
	I1107 23:21:42.087881  103523 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 \
	I1107 23:21:42.087908  103523 kubeadm.go:322] 	--control-plane 
	I1107 23:21:42.087920  103523 command_runner.go:130] > 	--control-plane 
	I1107 23:21:42.087933  103523 kubeadm.go:322] 
	I1107 23:21:42.088033  103523 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:21:42.088043  103523 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1107 23:21:42.088049  103523 kubeadm.go:322] 
	I1107 23:21:42.088158  103523 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token czd864.1kl2q5xnkfmanv1k \
	I1107 23:21:42.088171  103523 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token czd864.1kl2q5xnkfmanv1k \
	I1107 23:21:42.088313  103523 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 
	I1107 23:21:42.088326  103523 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 
	I1107 23:21:42.090265  103523 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1107 23:21:42.090294  103523 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1107 23:21:42.090401  103523 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:21:42.090413  103523 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:21:42.090431  103523 cni.go:84] Creating CNI manager for ""
	I1107 23:21:42.090441  103523 cni.go:136] 1 nodes found, recommending kindnet
	I1107 23:21:42.092711  103523 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1107 23:21:42.094708  103523 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:21:42.099670  103523 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 23:21:42.099701  103523 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I1107 23:21:42.099712  103523 command_runner.go:130] > Device: 37h/55d	Inode: 560775      Links: 1
	I1107 23:21:42.099723  103523 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:21:42.099734  103523 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1107 23:21:42.099744  103523 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1107 23:21:42.099755  103523 command_runner.go:130] > Change: 2023-11-07 23:01:52.156027715 +0000
	I1107 23:21:42.099788  103523 command_runner.go:130] >  Birth: 2023-11-07 23:01:52.132026002 +0000
	I1107 23:21:42.099839  103523 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:21:42.099857  103523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:21:42.117666  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:21:42.807166  103523 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1107 23:21:42.807188  103523 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1107 23:21:42.807196  103523 command_runner.go:130] > serviceaccount/kindnet created
	I1107 23:21:42.807200  103523 command_runner.go:130] > daemonset.apps/kindnet created
	I1107 23:21:42.807231  103523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1107 23:21:42.807338  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:42.807360  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e minikube.k8s.io/name=multinode-542158 minikube.k8s.io/updated_at=2023_11_07T23_21_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:42.814640  103523 command_runner.go:130] > -16
	I1107 23:21:42.814693  103523 ops.go:34] apiserver oom_adj: -16
	I1107 23:21:42.906468  103523 command_runner.go:130] > node/multinode-542158 labeled
	I1107 23:21:42.909100  103523 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1107 23:21:42.909213  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:42.974850  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:42.974967  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:43.043039  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:43.543913  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:43.604426  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:44.043597  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:44.107202  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:44.543226  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:44.604323  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:45.043454  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:45.108832  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:45.543465  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:45.608538  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:46.044161  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:46.106647  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:46.543339  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:46.608245  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:47.043911  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:47.107605  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:47.543877  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:47.609973  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:48.043502  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:48.111705  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:48.544234  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:48.607939  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:49.043301  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:49.108013  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:49.544130  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:49.607007  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:50.044074  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:50.110012  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:50.543938  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:50.607340  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:51.043927  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:51.109664  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:51.544147  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:51.607150  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:52.043194  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:52.110730  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:52.543258  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:52.606771  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:53.043241  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:53.109946  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:53.543491  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:53.606753  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:54.044133  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:54.109093  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:54.543703  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:54.616633  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:55.043290  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:55.107677  103523 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1107 23:21:55.543662  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1107 23:21:55.612584  103523 command_runner.go:130] > NAME      SECRETS   AGE
	I1107 23:21:55.612609  103523 command_runner.go:130] > default   0         0s
	I1107 23:21:55.612642  103523 kubeadm.go:1081] duration metric: took 12.805362965s to wait for elevateKubeSystemPrivileges.
	I1107 23:21:55.612664  103523 kubeadm.go:406] StartCluster complete in 23.273807381s
	I1107 23:21:55.612709  103523 settings.go:142] acquiring lock: {Name:mke2e0b04eb18441805a33c4c4584e304f0bb176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:55.612785  103523 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:21:55.613439  103523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/kubeconfig: {Name:mk2d252233a242c1461c7aa60d2f37a37a1be73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:21:55.613703  103523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1107 23:21:55.613833  103523 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1107 23:21:55.613936  103523 addons.go:69] Setting storage-provisioner=true in profile "multinode-542158"
	I1107 23:21:55.613940  103523 config.go:182] Loaded profile config "multinode-542158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:21:55.613949  103523 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:21:55.613960  103523 addons.go:69] Setting default-storageclass=true in profile "multinode-542158"
	I1107 23:21:55.613982  103523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-542158"
	I1107 23:21:55.613963  103523 addons.go:231] Setting addon storage-provisioner=true in "multinode-542158"
	I1107 23:21:55.614121  103523 host.go:66] Checking if "multinode-542158" exists ...
	I1107 23:21:55.614241  103523 kapi.go:59] client config for multinode-542158: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:21:55.614369  103523 cli_runner.go:164] Run: docker container inspect multinode-542158 --format={{.State.Status}}
	I1107 23:21:55.614719  103523 cli_runner.go:164] Run: docker container inspect multinode-542158 --format={{.State.Status}}
	I1107 23:21:55.614849  103523 cert_rotation.go:137] Starting client certificate rotation controller
	I1107 23:21:55.615107  103523 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:21:55.615121  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:55.615129  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:55.615136  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:55.625476  103523 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1107 23:21:55.625498  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:55.625505  103523 round_trippers.go:580]     Content-Length: 291
	I1107 23:21:55.625510  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:55 GMT
	I1107 23:21:55.625516  103523 round_trippers.go:580]     Audit-Id: beaa77da-875e-4037-a17a-a6a7c7bb50dc
	I1107 23:21:55.625521  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:55.625526  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:55.625530  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:55.625535  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:55.625560  103523 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"eb618563-1594-48e0-bbf3-afdea9801507","resourceVersion":"297","creationTimestamp":"2023-11-07T23:21:41Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1107 23:21:55.625910  103523 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"eb618563-1594-48e0-bbf3-afdea9801507","resourceVersion":"297","creationTimestamp":"2023-11-07T23:21:41Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1107 23:21:55.625966  103523 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:21:55.625979  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:55.625986  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:55.625991  103523 round_trippers.go:473]     Content-Type: application/json
	I1107 23:21:55.625997  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:55.632881  103523 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1107 23:21:55.632909  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:55.632919  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:55.632926  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:55.632933  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:55.632940  103523 round_trippers.go:580]     Content-Length: 291
	I1107 23:21:55.632948  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:55 GMT
	I1107 23:21:55.632959  103523 round_trippers.go:580]     Audit-Id: fe34e064-21c2-455a-a8fc-3a41d305f32d
	I1107 23:21:55.632967  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:55.632995  103523 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"eb618563-1594-48e0-bbf3-afdea9801507","resourceVersion":"318","creationTimestamp":"2023-11-07T23:21:41Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1107 23:21:55.633184  103523 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:21:55.633203  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:55.633210  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:55.633215  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:55.635131  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:21:55.635154  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:55.635161  103523 round_trippers.go:580]     Audit-Id: 12f2412e-7a76-49d8-b031-ff981f494796
	I1107 23:21:55.635166  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:55.635171  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:55.635177  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:55.635184  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:55.635193  103523 round_trippers.go:580]     Content-Length: 291
	I1107 23:21:55.635205  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:55 GMT
	I1107 23:21:55.635232  103523 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"eb618563-1594-48e0-bbf3-afdea9801507","resourceVersion":"318","creationTimestamp":"2023-11-07T23:21:41Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1107 23:21:55.635332  103523 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-542158" context rescaled to 1 replicas
	I1107 23:21:55.635374  103523 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1107 23:21:55.638189  103523 out.go:177] * Verifying Kubernetes components...
	I1107 23:21:55.638363  103523 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:21:55.640558  103523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:21:55.640874  103523 kapi.go:59] client config for multinode-542158: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:21:55.641221  103523 addons.go:231] Setting addon default-storageclass=true in "multinode-542158"
	I1107 23:21:55.641264  103523 host.go:66] Checking if "multinode-542158" exists ...
	I1107 23:21:55.641820  103523 cli_runner.go:164] Run: docker container inspect multinode-542158 --format={{.State.Status}}
	I1107 23:21:55.643882  103523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 23:21:55.645739  103523 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:21:55.645765  103523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1107 23:21:55.645823  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:55.663800  103523 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1107 23:21:55.663824  103523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1107 23:21:55.663869  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:21:55.666197  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa Username:docker}
	I1107 23:21:55.681351  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa Username:docker}
	I1107 23:21:55.795351  103523 command_runner.go:130] > apiVersion: v1
	I1107 23:21:55.795379  103523 command_runner.go:130] > data:
	I1107 23:21:55.795387  103523 command_runner.go:130] >   Corefile: |
	I1107 23:21:55.795394  103523 command_runner.go:130] >     .:53 {
	I1107 23:21:55.795400  103523 command_runner.go:130] >         errors
	I1107 23:21:55.795408  103523 command_runner.go:130] >         health {
	I1107 23:21:55.795416  103523 command_runner.go:130] >            lameduck 5s
	I1107 23:21:55.795422  103523 command_runner.go:130] >         }
	I1107 23:21:55.795429  103523 command_runner.go:130] >         ready
	I1107 23:21:55.795439  103523 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1107 23:21:55.795450  103523 command_runner.go:130] >            pods insecure
	I1107 23:21:55.795459  103523 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1107 23:21:55.795469  103523 command_runner.go:130] >            ttl 30
	I1107 23:21:55.795490  103523 command_runner.go:130] >         }
	I1107 23:21:55.795501  103523 command_runner.go:130] >         prometheus :9153
	I1107 23:21:55.795509  103523 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1107 23:21:55.795518  103523 command_runner.go:130] >            max_concurrent 1000
	I1107 23:21:55.795528  103523 command_runner.go:130] >         }
	I1107 23:21:55.795535  103523 command_runner.go:130] >         cache 30
	I1107 23:21:55.795546  103523 command_runner.go:130] >         loop
	I1107 23:21:55.795555  103523 command_runner.go:130] >         reload
	I1107 23:21:55.795563  103523 command_runner.go:130] >         loadbalance
	I1107 23:21:55.795572  103523 command_runner.go:130] >     }
	I1107 23:21:55.795579  103523 command_runner.go:130] > kind: ConfigMap
	I1107 23:21:55.795588  103523 command_runner.go:130] > metadata:
	I1107 23:21:55.795599  103523 command_runner.go:130] >   creationTimestamp: "2023-11-07T23:21:41Z"
	I1107 23:21:55.795609  103523 command_runner.go:130] >   name: coredns
	I1107 23:21:55.795617  103523 command_runner.go:130] >   namespace: kube-system
	I1107 23:21:55.795633  103523 command_runner.go:130] >   resourceVersion: "219"
	I1107 23:21:55.795648  103523 command_runner.go:130] >   uid: 9087f55a-f85d-40eb-8cdb-eaaaa3aca198
	I1107 23:21:55.795866  103523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1107 23:21:55.796120  103523 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:21:55.796456  103523 kapi.go:59] client config for multinode-542158: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:21:55.796761  103523 node_ready.go:35] waiting up to 6m0s for node "multinode-542158" to be "Ready" ...
	I1107 23:21:55.796860  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:55.796872  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:55.796883  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:55.796893  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:55.801944  103523 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 23:21:55.801974  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:55.801984  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:55.801991  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:55.801999  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:55.802006  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:55.802013  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:55 GMT
	I1107 23:21:55.802030  103523 round_trippers.go:580]     Audit-Id: ec740a6d-17e3-4fe1-ad7a-22cde442f9ac
	I1107 23:21:55.802170  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"298","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1107 23:21:55.802911  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:55.802930  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:55.802942  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:55.802952  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:55.804997  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:55.805018  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:55.805026  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:55.805034  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:55.805042  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:55.805055  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:55 GMT
	I1107 23:21:55.805063  103523 round_trippers.go:580]     Audit-Id: 2bcb8959-05f6-46c9-9fd1-f5fb12da24b4
	I1107 23:21:55.805071  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:55.805237  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"298","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1107 23:21:55.900455  103523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1107 23:21:55.900478  103523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1107 23:21:56.305898  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:56.305918  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:56.305926  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:56.305932  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:56.308739  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:56.308772  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:56.308783  103523 round_trippers.go:580]     Audit-Id: a67fc7a7-6409-4576-8b65-037ca0438b3a
	I1107 23:21:56.308792  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:56.308801  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:56.308810  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:56.308825  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:56.308834  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:56 GMT
	I1107 23:21:56.308965  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"298","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1107 23:21:56.398871  103523 command_runner.go:130] > configmap/coredns replaced
	I1107 23:21:56.403723  103523 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1107 23:21:56.692052  103523 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1107 23:21:56.697009  103523 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1107 23:21:56.707455  103523 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1107 23:21:56.714564  103523 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1107 23:21:56.722024  103523 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1107 23:21:56.731532  103523 command_runner.go:130] > pod/storage-provisioner created
	I1107 23:21:56.736965  103523 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1107 23:21:56.737154  103523 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1107 23:21:56.737192  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:56.737205  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:56.737215  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:56.739297  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:56.739315  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:56.739323  103523 round_trippers.go:580]     Audit-Id: 13c6a8d0-c743-42ec-aed8-47cd6533a8c1
	I1107 23:21:56.739328  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:56.739333  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:56.739338  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:56.739343  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:56.739351  103523 round_trippers.go:580]     Content-Length: 1273
	I1107 23:21:56.739359  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:56 GMT
	I1107 23:21:56.739411  103523 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"369"},"items":[{"metadata":{"name":"standard","uid":"67dda000-16e7-48af-9080-4fa5f391507f","resourceVersion":"356","creationTimestamp":"2023-11-07T23:21:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-07T23:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1107 23:21:56.739783  103523 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"67dda000-16e7-48af-9080-4fa5f391507f","resourceVersion":"356","creationTimestamp":"2023-11-07T23:21:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-07T23:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 23:21:56.739845  103523 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1107 23:21:56.739854  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:56.739861  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:56.739869  103523 round_trippers.go:473]     Content-Type: application/json
	I1107 23:21:56.739878  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:56.744595  103523 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:21:56.744624  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:56.744632  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:56.744637  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:56.744681  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:56.744695  103523 round_trippers.go:580]     Content-Length: 1220
	I1107 23:21:56.744702  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:56 GMT
	I1107 23:21:56.744718  103523 round_trippers.go:580]     Audit-Id: 3b34b865-d65d-44d0-9b51-2c007660d957
	I1107 23:21:56.744729  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:56.744760  103523 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"67dda000-16e7-48af-9080-4fa5f391507f","resourceVersion":"356","creationTimestamp":"2023-11-07T23:21:56Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-07T23:21:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1107 23:21:56.747087  103523 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1107 23:21:56.748604  103523 addons.go:502] enable addons completed in 1.134768677s: enabled=[storage-provisioner default-storageclass]
	I1107 23:21:56.806873  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:56.806893  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:56.806902  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:56.806909  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:56.809541  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:56.809562  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:56.809570  103523 round_trippers.go:580]     Audit-Id: 67077ebe-0289-4cc8-8c75-80ef6ccb9005
	I1107 23:21:56.809576  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:56.809584  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:56.809593  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:56.809602  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:56.809611  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:56 GMT
	I1107 23:21:56.809771  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"298","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1107 23:21:57.306492  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:57.306523  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:57.306536  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:57.306546  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:57.311858  103523 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1107 23:21:57.311881  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:57.311888  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:57.311893  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:57.311898  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:57.311903  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:57 GMT
	I1107 23:21:57.311908  103523 round_trippers.go:580]     Audit-Id: 1bfe6ce2-23f0-4b65-b8a2-2ae55c3c7e75
	I1107 23:21:57.311916  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:57.312029  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:57.312320  103523 node_ready.go:49] node "multinode-542158" has status "Ready":"True"
	I1107 23:21:57.312335  103523 node_ready.go:38] duration metric: took 1.515538242s waiting for node "multinode-542158" to be "Ready" ...
	I1107 23:21:57.312344  103523 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:21:57.312415  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:21:57.312423  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:57.312430  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:57.312437  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:57.315349  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:57.315373  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:57.315383  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:57.315390  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:57.315398  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:57 GMT
	I1107 23:21:57.315406  103523 round_trippers.go:580]     Audit-Id: 7269edba-18f6-4d3f-84dd-daf33e4c007c
	I1107 23:21:57.315413  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:57.315420  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:57.315917  103523 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"379"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d4f2j","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"357e0565-e17e-4d94-9a73-7bd0152ba3af","resourceVersion":"379","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54149 chars]
	I1107 23:21:57.318816  103523 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d4f2j" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:57.318888  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d4f2j
	I1107 23:21:57.318896  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:57.318904  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:57.318910  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:57.321144  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:57.321166  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:57.321175  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:57.321183  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:57.321190  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:57 GMT
	I1107 23:21:57.321197  103523 round_trippers.go:580]     Audit-Id: 0c663761-3025-4843-bfd1-8d52c1b5b809
	I1107 23:21:57.321208  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:57.321216  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:57.321342  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d4f2j","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"357e0565-e17e-4d94-9a73-7bd0152ba3af","resourceVersion":"379","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1107 23:21:57.321789  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:57.321804  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:57.321811  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:57.321817  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:57.323647  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:21:57.323665  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:57.323674  103523 round_trippers.go:580]     Audit-Id: d0093854-21ad-400e-b4dd-d199715ec354
	I1107 23:21:57.323679  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:57.323685  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:57.323690  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:57.323696  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:57.323701  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:57 GMT
	I1107 23:21:57.323863  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:57.324187  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d4f2j
	I1107 23:21:57.324198  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:57.324206  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:57.324211  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:57.326068  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:21:57.326082  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:57.326088  103523 round_trippers.go:580]     Audit-Id: 2b51ffc4-3b71-4eac-a85d-4ebfdae64aac
	I1107 23:21:57.326094  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:57.326099  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:57.326104  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:57.326109  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:57.326114  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:57 GMT
	I1107 23:21:57.326226  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d4f2j","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"357e0565-e17e-4d94-9a73-7bd0152ba3af","resourceVersion":"379","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1107 23:21:57.326606  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:57.326619  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:57.326626  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:57.326631  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:57.328630  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:21:57.328646  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:57.328652  103523 round_trippers.go:580]     Audit-Id: 79944619-6976-4a06-83dc-b1cdfce56178
	I1107 23:21:57.328659  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:57.328668  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:57.328676  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:57.328685  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:57.328696  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:57 GMT
	I1107 23:21:57.328832  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:57.830051  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d4f2j
	I1107 23:21:57.830083  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:57.830091  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:57.830097  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:57.832517  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:57.832544  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:57.832554  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:57.832563  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:57 GMT
	I1107 23:21:57.832570  103523 round_trippers.go:580]     Audit-Id: 9663318a-9f3b-46aa-8441-098ca4c12416
	I1107 23:21:57.832578  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:57.832588  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:57.832597  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:57.832752  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d4f2j","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"357e0565-e17e-4d94-9a73-7bd0152ba3af","resourceVersion":"379","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1107 23:21:57.833204  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:57.833219  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:57.833227  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:57.833237  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:57.835262  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:57.835278  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:57.835288  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:57.835298  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:57 GMT
	I1107 23:21:57.835310  103523 round_trippers.go:580]     Audit-Id: 880466b5-1f4c-4364-8eb9-80cd08d6fdd6
	I1107 23:21:57.835320  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:57.835328  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:57.835333  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:57.835486  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:58.330265  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d4f2j
	I1107 23:21:58.330297  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.330310  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.330320  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.332826  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:58.332848  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.332856  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.332863  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.332871  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.332879  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.332886  103523 round_trippers.go:580]     Audit-Id: 093ff775-16d0-4163-86d2-0cd2cfca6e16
	I1107 23:21:58.332894  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.333023  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d4f2j","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"357e0565-e17e-4d94-9a73-7bd0152ba3af","resourceVersion":"389","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1107 23:21:58.333521  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:58.333536  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.333544  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.333549  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.335603  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:58.335623  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.335632  103523 round_trippers.go:580]     Audit-Id: c78fbda7-b19f-428a-a14e-528ce9d3c52f
	I1107 23:21:58.335640  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.335648  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.335659  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.335668  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.335677  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.335907  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:58.336226  103523 pod_ready.go:92] pod "coredns-5dd5756b68-d4f2j" in "kube-system" namespace has status "Ready":"True"
	I1107 23:21:58.336244  103523 pod_ready.go:81] duration metric: took 1.017404909s waiting for pod "coredns-5dd5756b68-d4f2j" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:58.336272  103523 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:58.336337  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-542158
	I1107 23:21:58.336346  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.336358  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.336368  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.338302  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:21:58.338322  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.338329  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.338337  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.338344  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.338351  103523 round_trippers.go:580]     Audit-Id: a6bc5f76-f72b-4be9-b325-4bdddf82ed6e
	I1107 23:21:58.338359  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.338367  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.338446  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-542158","namespace":"kube-system","uid":"ff322856-032e-409e-a32e-937f41b80534","resourceVersion":"279","creationTimestamp":"2023-11-07T23:21:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8118049fe5aee964ee5c4fa55a555ba4","kubernetes.io/config.mirror":"8118049fe5aee964ee5c4fa55a555ba4","kubernetes.io/config.seen":"2023-11-07T23:21:41.992952425Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1107 23:21:58.338789  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:58.338799  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.338808  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.338815  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.340673  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:21:58.340692  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.340702  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.340710  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.340718  103523 round_trippers.go:580]     Audit-Id: 9e7e8393-db71-467b-a791-1d8b5a28a98e
	I1107 23:21:58.340726  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.340739  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.340748  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.340884  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:58.341179  103523 pod_ready.go:92] pod "etcd-multinode-542158" in "kube-system" namespace has status "Ready":"True"
	I1107 23:21:58.341195  103523 pod_ready.go:81] duration metric: took 4.91201ms waiting for pod "etcd-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:58.341206  103523 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:58.341251  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-542158
	I1107 23:21:58.341259  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.341265  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.341271  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.343104  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:21:58.343122  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.343147  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.343157  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.343166  103523 round_trippers.go:580]     Audit-Id: e86b9593-df9b-44aa-8c97-e72ea238ae4e
	I1107 23:21:58.343179  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.343194  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.343203  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.343388  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-542158","namespace":"kube-system","uid":"0a1da361-805c-4b8f-a3db-88e4834e12cb","resourceVersion":"253","creationTimestamp":"2023-11-07T23:21:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"9042e7c5330cfcad3544cd17028012a6","kubernetes.io/config.mirror":"9042e7c5330cfcad3544cd17028012a6","kubernetes.io/config.seen":"2023-11-07T23:21:41.992954381Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1107 23:21:58.343793  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:58.343810  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.343818  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.343825  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.345546  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:21:58.345565  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.345573  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.345582  103523 round_trippers.go:580]     Audit-Id: f059d6cb-544c-43bc-8a21-df528bf89ebd
	I1107 23:21:58.345590  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.345606  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.345614  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.345626  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.345756  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:58.346062  103523 pod_ready.go:92] pod "kube-apiserver-multinode-542158" in "kube-system" namespace has status "Ready":"True"
	I1107 23:21:58.346078  103523 pod_ready.go:81] duration metric: took 4.865895ms waiting for pod "kube-apiserver-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:58.346087  103523 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:58.346136  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-542158
	I1107 23:21:58.346146  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.346153  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.346166  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.347958  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:21:58.347979  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.347988  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.347997  103523 round_trippers.go:580]     Audit-Id: 8f697f35-f64b-4e5b-b607-0b80d1f6ba64
	I1107 23:21:58.348006  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.348015  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.348027  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.348036  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.348199  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-542158","namespace":"kube-system","uid":"76366db7-a73c-4b9d-a0a5-e572a95585c6","resourceVersion":"267","creationTimestamp":"2023-11-07T23:21:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"63686c3fbb92c907ba59d1d8ac68e4fc","kubernetes.io/config.mirror":"63686c3fbb92c907ba59d1d8ac68e4fc","kubernetes.io/config.seen":"2023-11-07T23:21:36.059217058Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1107 23:21:58.506927  103523 request.go:629] Waited for 158.32147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:58.506996  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:58.507004  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.507011  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.507017  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.509242  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:58.509264  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.509273  103523 round_trippers.go:580]     Audit-Id: 71b31d64-104d-45a3-bdee-262ea355921a
	I1107 23:21:58.509281  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.509289  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.509296  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.509310  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.509323  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.509529  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:58.509938  103523 pod_ready.go:92] pod "kube-controller-manager-multinode-542158" in "kube-system" namespace has status "Ready":"True"
	I1107 23:21:58.509958  103523 pod_ready.go:81] duration metric: took 163.863345ms waiting for pod "kube-controller-manager-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:58.509975  103523 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5m8jq" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:58.707490  103523 request.go:629] Waited for 197.447546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5m8jq
	I1107 23:21:58.707573  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5m8jq
	I1107 23:21:58.707578  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.707586  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.707592  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.710073  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:58.710102  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.710114  103523 round_trippers.go:580]     Audit-Id: d445a0bc-3b74-47d4-bbb8-6b820df103dc
	I1107 23:21:58.710122  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.710129  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.710137  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.710145  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.710152  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.710323  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5m8jq","generateName":"kube-proxy-","namespace":"kube-system","uid":"546186cc-fa1d-43c0-8dea-81bfe7a6a835","resourceVersion":"370","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a73ed559-5e99-4814-8b20-df2d69624bd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a73ed559-5e99-4814-8b20-df2d69624bd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1107 23:21:58.907267  103523 request.go:629] Waited for 196.414578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:58.907328  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:58.907333  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:58.907340  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:58.907346  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:58.909771  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:58.909837  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:58.909855  103523 round_trippers.go:580]     Audit-Id: 5f276600-e934-43b0-841d-ea2196f48487
	I1107 23:21:58.909865  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:58.909875  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:58.909888  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:58.909903  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:58.909918  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:58 GMT
	I1107 23:21:58.910039  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:58.910362  103523 pod_ready.go:92] pod "kube-proxy-5m8jq" in "kube-system" namespace has status "Ready":"True"
	I1107 23:21:58.910388  103523 pod_ready.go:81] duration metric: took 400.401641ms waiting for pod "kube-proxy-5m8jq" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:58.910402  103523 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:59.106867  103523 request.go:629] Waited for 196.380148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-542158
	I1107 23:21:59.106943  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-542158
	I1107 23:21:59.106948  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:59.106956  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:59.106965  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:59.109355  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:59.109385  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:59.109397  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:59 GMT
	I1107 23:21:59.109404  103523 round_trippers.go:580]     Audit-Id: deac2ac4-fbdd-4eae-838a-b5f7cd11d8e9
	I1107 23:21:59.109411  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:59.109419  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:59.109428  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:59.109440  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:59.109579  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-542158","namespace":"kube-system","uid":"ec3b8184-4819-4a08-8361-f951b553564c","resourceVersion":"255","creationTimestamp":"2023-11-07T23:21:42Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"27daf2b049ce91bfd4f81b0138764b44","kubernetes.io/config.mirror":"27daf2b049ce91bfd4f81b0138764b44","kubernetes.io/config.seen":"2023-11-07T23:21:41.992950355Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1107 23:21:59.307351  103523 request.go:629] Waited for 197.402337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:59.307431  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:21:59.307436  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:59.307444  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:59.307450  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:59.309762  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:59.309786  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:59.309796  103523 round_trippers.go:580]     Audit-Id: 6f791cfb-272f-4c05-be30-c96aa8c74b70
	I1107 23:21:59.309804  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:59.309812  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:59.309819  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:59.309826  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:59.309834  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:59 GMT
	I1107 23:21:59.309927  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:21:59.310239  103523 pod_ready.go:92] pod "kube-scheduler-multinode-542158" in "kube-system" namespace has status "Ready":"True"
	I1107 23:21:59.310256  103523 pod_ready.go:81] duration metric: took 399.8456ms waiting for pod "kube-scheduler-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:21:59.310272  103523 pod_ready.go:38] duration metric: took 1.997901507s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:21:59.310292  103523 api_server.go:52] waiting for apiserver process to appear ...
	I1107 23:21:59.310359  103523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:21:59.321380  103523 command_runner.go:130] > 1426
	I1107 23:21:59.321410  103523 api_server.go:72] duration metric: took 3.686006613s to wait for apiserver process to appear ...
	I1107 23:21:59.321418  103523 api_server.go:88] waiting for apiserver healthz status ...
	I1107 23:21:59.321432  103523 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1107 23:21:59.325467  103523 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1107 23:21:59.325533  103523 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1107 23:21:59.325541  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:59.325549  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:59.325557  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:59.326469  103523 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1107 23:21:59.326485  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:59.326495  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:59.326507  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:59.326517  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:59.326524  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:59.326530  103523 round_trippers.go:580]     Content-Length: 264
	I1107 23:21:59.326537  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:59 GMT
	I1107 23:21:59.326542  103523 round_trippers.go:580]     Audit-Id: f5b805d5-1d16-454e-9a8f-2197aa71cad5
	I1107 23:21:59.326558  103523 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1107 23:21:59.326654  103523 api_server.go:141] control plane version: v1.28.3
	I1107 23:21:59.326678  103523 api_server.go:131] duration metric: took 5.253495ms to wait for apiserver health ...
	I1107 23:21:59.326688  103523 system_pods.go:43] waiting for kube-system pods to appear ...
	I1107 23:21:59.507096  103523 request.go:629] Waited for 180.340056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:21:59.507181  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:21:59.507188  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:59.507205  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:59.507220  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:59.510524  103523 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:21:59.510551  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:59.510564  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:59 GMT
	I1107 23:21:59.510571  103523 round_trippers.go:580]     Audit-Id: d5d6ca67-c2e8-406a-8dab-4842cf5b6e69
	I1107 23:21:59.510577  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:59.510585  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:59.510591  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:59.510600  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:59.511043  103523 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"399"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d4f2j","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"357e0565-e17e-4d94-9a73-7bd0152ba3af","resourceVersion":"389","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1107 23:21:59.512829  103523 system_pods.go:59] 8 kube-system pods found
	I1107 23:21:59.512855  103523 system_pods.go:61] "coredns-5dd5756b68-d4f2j" [357e0565-e17e-4d94-9a73-7bd0152ba3af] Running
	I1107 23:21:59.512860  103523 system_pods.go:61] "etcd-multinode-542158" [ff322856-032e-409e-a32e-937f41b80534] Running
	I1107 23:21:59.512864  103523 system_pods.go:61] "kindnet-7hgsm" [3d31a034-7445-45d3-9ad0-6dc7e44d4513] Running
	I1107 23:21:59.512870  103523 system_pods.go:61] "kube-apiserver-multinode-542158" [0a1da361-805c-4b8f-a3db-88e4834e12cb] Running
	I1107 23:21:59.512876  103523 system_pods.go:61] "kube-controller-manager-multinode-542158" [76366db7-a73c-4b9d-a0a5-e572a95585c6] Running
	I1107 23:21:59.512881  103523 system_pods.go:61] "kube-proxy-5m8jq" [546186cc-fa1d-43c0-8dea-81bfe7a6a835] Running
	I1107 23:21:59.512885  103523 system_pods.go:61] "kube-scheduler-multinode-542158" [ec3b8184-4819-4a08-8361-f951b553564c] Running
	I1107 23:21:59.512892  103523 system_pods.go:61] "storage-provisioner" [cd4b23c6-8cf3-4f1a-909d-4f727d1ecebd] Running
	I1107 23:21:59.512898  103523 system_pods.go:74] duration metric: took 186.202442ms to wait for pod list to return data ...
	I1107 23:21:59.512914  103523 default_sa.go:34] waiting for default service account to be created ...
	I1107 23:21:59.707439  103523 request.go:629] Waited for 194.454969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:21:59.707519  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1107 23:21:59.707524  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:59.707532  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:59.707542  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:59.710090  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:21:59.710117  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:59.710138  103523 round_trippers.go:580]     Content-Length: 261
	I1107 23:21:59.710144  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:59 GMT
	I1107 23:21:59.710149  103523 round_trippers.go:580]     Audit-Id: 41a92d82-24da-4990-bfef-ab453ef7011e
	I1107 23:21:59.710154  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:59.710160  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:59.710165  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:59.710173  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:59.710198  103523 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c20ed9e5-376c-4f41-9823-8893ad6627b4","resourceVersion":"301","creationTimestamp":"2023-11-07T23:21:55Z"}}]}
	I1107 23:21:59.710368  103523 default_sa.go:45] found service account: "default"
	I1107 23:21:59.710389  103523 default_sa.go:55] duration metric: took 197.4692ms for default service account to be created ...
	I1107 23:21:59.710396  103523 system_pods.go:116] waiting for k8s-apps to be running ...
	I1107 23:21:59.906833  103523 request.go:629] Waited for 196.362026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:21:59.906885  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:21:59.906890  103523 round_trippers.go:469] Request Headers:
	I1107 23:21:59.906897  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:21:59.906903  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:21:59.909998  103523 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1107 23:21:59.910022  103523 round_trippers.go:577] Response Headers:
	I1107 23:21:59.910044  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:21:59.910053  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:21:59.910061  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:21:59 GMT
	I1107 23:21:59.910069  103523 round_trippers.go:580]     Audit-Id: 4c509b58-5d3a-49dd-bba5-102d3234ab9d
	I1107 23:21:59.910082  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:21:59.910088  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:21:59.910531  103523 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d4f2j","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"357e0565-e17e-4d94-9a73-7bd0152ba3af","resourceVersion":"389","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1107 23:21:59.912244  103523 system_pods.go:86] 8 kube-system pods found
	I1107 23:21:59.912265  103523 system_pods.go:89] "coredns-5dd5756b68-d4f2j" [357e0565-e17e-4d94-9a73-7bd0152ba3af] Running
	I1107 23:21:59.912270  103523 system_pods.go:89] "etcd-multinode-542158" [ff322856-032e-409e-a32e-937f41b80534] Running
	I1107 23:21:59.912275  103523 system_pods.go:89] "kindnet-7hgsm" [3d31a034-7445-45d3-9ad0-6dc7e44d4513] Running
	I1107 23:21:59.912279  103523 system_pods.go:89] "kube-apiserver-multinode-542158" [0a1da361-805c-4b8f-a3db-88e4834e12cb] Running
	I1107 23:21:59.912284  103523 system_pods.go:89] "kube-controller-manager-multinode-542158" [76366db7-a73c-4b9d-a0a5-e572a95585c6] Running
	I1107 23:21:59.912292  103523 system_pods.go:89] "kube-proxy-5m8jq" [546186cc-fa1d-43c0-8dea-81bfe7a6a835] Running
	I1107 23:21:59.912327  103523 system_pods.go:89] "kube-scheduler-multinode-542158" [ec3b8184-4819-4a08-8361-f951b553564c] Running
	I1107 23:21:59.912338  103523 system_pods.go:89] "storage-provisioner" [cd4b23c6-8cf3-4f1a-909d-4f727d1ecebd] Running
	I1107 23:21:59.912343  103523 system_pods.go:126] duration metric: took 201.941755ms to wait for k8s-apps to be running ...
	I1107 23:21:59.912351  103523 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:21:59.912396  103523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:21:59.923396  103523 system_svc.go:56] duration metric: took 11.035358ms WaitForService to wait for kubelet.
	I1107 23:21:59.923424  103523 kubeadm.go:581] duration metric: took 4.288020205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:21:59.923443  103523 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:22:00.106861  103523 request.go:629] Waited for 183.350191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1107 23:22:00.106930  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1107 23:22:00.106936  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:00.106943  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:00.106955  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:00.109398  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:00.109419  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:00.109426  103523 round_trippers.go:580]     Audit-Id: a324a4ca-8d62-4536-8dba-7116d1e2bb1d
	I1107 23:22:00.109432  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:00.109437  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:00.109442  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:00.109447  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:00.109453  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:00 GMT
	I1107 23:22:00.109547  103523 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1107 23:22:00.109923  103523 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 23:22:00.109944  103523 node_conditions.go:123] node cpu capacity is 8
	I1107 23:22:00.109954  103523 node_conditions.go:105] duration metric: took 186.507385ms to run NodePressure ...
	I1107 23:22:00.109964  103523 start.go:228] waiting for startup goroutines ...
	I1107 23:22:00.109970  103523 start.go:233] waiting for cluster config update ...
	I1107 23:22:00.109980  103523 start.go:242] writing updated cluster config ...
	I1107 23:22:00.112668  103523 out.go:177] 
	I1107 23:22:00.114463  103523 config.go:182] Loaded profile config "multinode-542158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:22:00.114538  103523 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/config.json ...
	I1107 23:22:00.116396  103523 out.go:177] * Starting worker node multinode-542158-m02 in cluster multinode-542158
	I1107 23:22:00.117944  103523 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:22:00.119721  103523 out.go:177] * Pulling base image ...
	I1107 23:22:00.123880  103523 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:22:00.123912  103523 cache.go:56] Caching tarball of preloaded images
	I1107 23:22:00.123980  103523 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:22:00.124023  103523 preload.go:174] Found /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1107 23:22:00.124042  103523 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:22:00.124131  103523 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/config.json ...
	I1107 23:22:00.140515  103523 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 23:22:00.140542  103523 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 23:22:00.140562  103523 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:22:00.140599  103523 start.go:365] acquiring machines lock for multinode-542158-m02: {Name:mk5c1e7cd01eaccadeaefd767cf8c6314b4151bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:22:00.140714  103523 start.go:369] acquired machines lock for "multinode-542158-m02" in 92.302µs
	I1107 23:22:00.140743  103523 start.go:93] Provisioning new machine with config: &{Name:multinode-542158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-542158 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:22:00.140822  103523 start.go:125] createHost starting for "m02" (driver="docker")
	I1107 23:22:00.143184  103523 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 23:22:00.143301  103523 start.go:159] libmachine.API.Create for "multinode-542158" (driver="docker")
	I1107 23:22:00.143324  103523 client.go:168] LocalClient.Create starting
	I1107 23:22:00.143405  103523 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem
	I1107 23:22:00.143444  103523 main.go:141] libmachine: Decoding PEM data...
	I1107 23:22:00.143467  103523 main.go:141] libmachine: Parsing certificate...
	I1107 23:22:00.143534  103523 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem
	I1107 23:22:00.143559  103523 main.go:141] libmachine: Decoding PEM data...
	I1107 23:22:00.143571  103523 main.go:141] libmachine: Parsing certificate...
	I1107 23:22:00.143822  103523 cli_runner.go:164] Run: docker network inspect multinode-542158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:22:00.163965  103523 network_create.go:77] Found existing network {name:multinode-542158 subnet:0xc002cfe2a0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1107 23:22:00.164023  103523 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-542158-m02" container
	I1107 23:22:00.164102  103523 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 23:22:00.182147  103523 cli_runner.go:164] Run: docker volume create multinode-542158-m02 --label name.minikube.sigs.k8s.io=multinode-542158-m02 --label created_by.minikube.sigs.k8s.io=true
	I1107 23:22:00.200147  103523 oci.go:103] Successfully created a docker volume multinode-542158-m02
	I1107 23:22:00.200239  103523 cli_runner.go:164] Run: docker run --rm --name multinode-542158-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-542158-m02 --entrypoint /usr/bin/test -v multinode-542158-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 23:22:00.768529  103523 oci.go:107] Successfully prepared a docker volume multinode-542158-m02
	I1107 23:22:00.768561  103523 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:22:00.768588  103523 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 23:22:00.768663  103523 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-542158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 23:22:05.982525  103523 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-542158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.213822159s)
	I1107 23:22:05.982569  103523 kic.go:203] duration metric: took 5.213980 seconds to extract preloaded images to volume
	W1107 23:22:05.982712  103523 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1107 23:22:05.982801  103523 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 23:22:06.034226  103523 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-542158-m02 --name multinode-542158-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-542158-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-542158-m02 --network multinode-542158 --ip 192.168.58.3 --volume multinode-542158-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:22:06.356159  103523 cli_runner.go:164] Run: docker container inspect multinode-542158-m02 --format={{.State.Running}}
	I1107 23:22:06.373174  103523 cli_runner.go:164] Run: docker container inspect multinode-542158-m02 --format={{.State.Status}}
	I1107 23:22:06.391647  103523 cli_runner.go:164] Run: docker exec multinode-542158-m02 stat /var/lib/dpkg/alternatives/iptables
	I1107 23:22:06.452533  103523 oci.go:144] the created container "multinode-542158-m02" has a running status.
	I1107 23:22:06.452573  103523 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158-m02/id_rsa...
	I1107 23:22:06.692933  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1107 23:22:06.692989  103523 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 23:22:06.714580  103523 cli_runner.go:164] Run: docker container inspect multinode-542158-m02 --format={{.State.Status}}
	I1107 23:22:06.737557  103523 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 23:22:06.737582  103523 kic_runner.go:114] Args: [docker exec --privileged multinode-542158-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 23:22:06.803103  103523 cli_runner.go:164] Run: docker container inspect multinode-542158-m02 --format={{.State.Status}}
	I1107 23:22:06.825565  103523 machine.go:88] provisioning docker machine ...
	I1107 23:22:06.825685  103523 ubuntu.go:169] provisioning hostname "multinode-542158-m02"
	I1107 23:22:06.825744  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158-m02
	I1107 23:22:06.845814  103523 main.go:141] libmachine: Using SSH client type: native
	I1107 23:22:06.846306  103523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1107 23:22:06.846327  103523 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-542158-m02 && echo "multinode-542158-m02" | sudo tee /etc/hostname
	I1107 23:22:07.062948  103523 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-542158-m02
	
	I1107 23:22:07.063021  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158-m02
	I1107 23:22:07.081236  103523 main.go:141] libmachine: Using SSH client type: native
	I1107 23:22:07.081662  103523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1107 23:22:07.081684  103523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-542158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-542158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-542158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:22:07.200178  103523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:22:07.200212  103523 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9432/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9432/.minikube}
	I1107 23:22:07.200239  103523 ubuntu.go:177] setting up certificates
	I1107 23:22:07.200257  103523 provision.go:83] configureAuth start
	I1107 23:22:07.200320  103523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542158-m02
	I1107 23:22:07.216209  103523 provision.go:138] copyHostCerts
	I1107 23:22:07.216245  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem
	I1107 23:22:07.216280  103523 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem, removing ...
	I1107 23:22:07.216293  103523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem
	I1107 23:22:07.216362  103523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem (1078 bytes)
	I1107 23:22:07.216449  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem
	I1107 23:22:07.216468  103523 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem, removing ...
	I1107 23:22:07.216474  103523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem
	I1107 23:22:07.216513  103523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem (1123 bytes)
	I1107 23:22:07.216572  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem
	I1107 23:22:07.216595  103523 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem, removing ...
	I1107 23:22:07.216602  103523 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem
	I1107 23:22:07.216631  103523 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem (1675 bytes)
	I1107 23:22:07.216723  103523 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem org=jenkins.multinode-542158-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-542158-m02]
	I1107 23:22:07.555391  103523 provision.go:172] copyRemoteCerts
	I1107 23:22:07.555448  103523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:22:07.555494  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158-m02
	I1107 23:22:07.571683  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158-m02/id_rsa Username:docker}
	I1107 23:22:07.656874  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 23:22:07.656930  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:22:07.680430  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 23:22:07.680489  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1107 23:22:07.702615  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 23:22:07.702682  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1107 23:22:07.724492  103523 provision.go:86] duration metric: configureAuth took 524.217222ms
	I1107 23:22:07.724518  103523 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:22:07.724700  103523 config.go:182] Loaded profile config "multinode-542158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:22:07.724810  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158-m02
	I1107 23:22:07.742937  103523 main.go:141] libmachine: Using SSH client type: native
	I1107 23:22:07.743289  103523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I1107 23:22:07.743311  103523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:22:07.944650  103523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:22:07.944676  103523 machine.go:91] provisioned docker machine in 1.119010921s
	I1107 23:22:07.944686  103523 client.go:171] LocalClient.Create took 7.801353579s
	I1107 23:22:07.944703  103523 start.go:167] duration metric: libmachine.API.Create for "multinode-542158" took 7.801401245s
	I1107 23:22:07.944712  103523 start.go:300] post-start starting for "multinode-542158-m02" (driver="docker")
	I1107 23:22:07.944724  103523 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:22:07.944780  103523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:22:07.944818  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158-m02
	I1107 23:22:07.961083  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158-m02/id_rsa Username:docker}
	I1107 23:22:08.048848  103523 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:22:08.051858  103523 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1107 23:22:08.051883  103523 command_runner.go:130] > NAME="Ubuntu"
	I1107 23:22:08.051892  103523 command_runner.go:130] > VERSION_ID="22.04"
	I1107 23:22:08.051902  103523 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1107 23:22:08.051911  103523 command_runner.go:130] > VERSION_CODENAME=jammy
	I1107 23:22:08.051918  103523 command_runner.go:130] > ID=ubuntu
	I1107 23:22:08.051925  103523 command_runner.go:130] > ID_LIKE=debian
	I1107 23:22:08.051933  103523 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1107 23:22:08.051950  103523 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1107 23:22:08.051959  103523 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1107 23:22:08.051965  103523 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1107 23:22:08.051972  103523 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1107 23:22:08.052019  103523 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:22:08.052041  103523 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:22:08.052050  103523 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:22:08.052056  103523 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 23:22:08.052064  103523 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/addons for local assets ...
	I1107 23:22:08.052116  103523 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/files for local assets ...
	I1107 23:22:08.052178  103523 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> 162112.pem in /etc/ssl/certs
	I1107 23:22:08.052193  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> /etc/ssl/certs/162112.pem
	I1107 23:22:08.052266  103523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:22:08.060406  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem --> /etc/ssl/certs/162112.pem (1708 bytes)
	I1107 23:22:08.082127  103523 start.go:303] post-start completed in 137.398784ms
	I1107 23:22:08.082494  103523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542158-m02
	I1107 23:22:08.099858  103523 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/config.json ...
	I1107 23:22:08.100126  103523 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:22:08.100189  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158-m02
	I1107 23:22:08.116101  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158-m02/id_rsa Username:docker}
	I1107 23:22:08.204283  103523 command_runner.go:130] > 27%!(MISSING)
	I1107 23:22:08.204462  103523 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:22:08.208767  103523 command_runner.go:130] > 214G
	I1107 23:22:08.208893  103523 start.go:128] duration metric: createHost completed in 8.068053567s
	I1107 23:22:08.208917  103523 start.go:83] releasing machines lock for "multinode-542158-m02", held for 8.068191118s
	I1107 23:22:08.208980  103523 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542158-m02
	I1107 23:22:08.227877  103523 out.go:177] * Found network options:
	I1107 23:22:08.229600  103523 out.go:177]   - NO_PROXY=192.168.58.2
	W1107 23:22:08.231290  103523 proxy.go:119] fail to check proxy env: Error ip not in block
	W1107 23:22:08.231333  103523 proxy.go:119] fail to check proxy env: Error ip not in block
	I1107 23:22:08.231400  103523 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:22:08.231434  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158-m02
	I1107 23:22:08.231487  103523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:22:08.231548  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158-m02
	I1107 23:22:08.248033  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158-m02/id_rsa Username:docker}
	I1107 23:22:08.248962  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158-m02/id_rsa Username:docker}
	I1107 23:22:08.467294  103523 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1107 23:22:08.467371  103523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:22:08.471518  103523 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1107 23:22:08.471538  103523 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1107 23:22:08.471544  103523 command_runner.go:130] > Device: b0h/176d	Inode: 556991      Links: 1
	I1107 23:22:08.471552  103523 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:22:08.471561  103523 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1107 23:22:08.471585  103523 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1107 23:22:08.471597  103523 command_runner.go:130] > Change: 2023-11-07 23:01:51.763999730 +0000
	I1107 23:22:08.471605  103523 command_runner.go:130] >  Birth: 2023-11-07 23:01:51.763999730 +0000
	I1107 23:22:08.471694  103523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:22:08.489083  103523 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:22:08.489173  103523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:22:08.516849  103523 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1107 23:22:08.516899  103523 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1107 23:22:08.516907  103523 start.go:472] detecting cgroup driver to use...
	I1107 23:22:08.516934  103523 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:22:08.516983  103523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:22:08.531246  103523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:22:08.541741  103523 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:22:08.541803  103523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:22:08.554273  103523 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:22:08.567298  103523 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1107 23:22:08.647667  103523 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:22:08.661227  103523 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1107 23:22:08.728939  103523 docker.go:219] disabling docker service ...
	I1107 23:22:08.729003  103523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:22:08.746900  103523 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:22:08.757857  103523 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:22:08.841882  103523 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1107 23:22:08.841950  103523 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:22:08.925127  103523 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1107 23:22:08.925200  103523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:22:08.935800  103523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:22:08.949785  103523 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1107 23:22:08.950474  103523 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1107 23:22:08.950537  103523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:22:08.959269  103523 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1107 23:22:08.959336  103523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:22:08.968074  103523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:22:08.976422  103523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:22:08.985580  103523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 23:22:08.994527  103523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 23:22:09.001709  103523 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1107 23:22:09.002395  103523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 23:22:09.010320  103523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 23:22:09.083965  103523 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1107 23:22:09.190533  103523 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1107 23:22:09.190601  103523 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1107 23:22:09.194051  103523 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1107 23:22:09.194082  103523 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1107 23:22:09.194090  103523 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1107 23:22:09.194098  103523 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:22:09.194104  103523 command_runner.go:130] > Access: 2023-11-07 23:22:09.178864367 +0000
	I1107 23:22:09.194112  103523 command_runner.go:130] > Modify: 2023-11-07 23:22:09.178864367 +0000
	I1107 23:22:09.194120  103523 command_runner.go:130] > Change: 2023-11-07 23:22:09.178864367 +0000
	I1107 23:22:09.194124  103523 command_runner.go:130] >  Birth: -
	I1107 23:22:09.194147  103523 start.go:540] Will wait 60s for crictl version
	I1107 23:22:09.194196  103523 ssh_runner.go:195] Run: which crictl
	I1107 23:22:09.197595  103523 command_runner.go:130] > /usr/bin/crictl
	I1107 23:22:09.197662  103523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1107 23:22:09.228064  103523 command_runner.go:130] > Version:  0.1.0
	I1107 23:22:09.228090  103523 command_runner.go:130] > RuntimeName:  cri-o
	I1107 23:22:09.228097  103523 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1107 23:22:09.228104  103523 command_runner.go:130] > RuntimeApiVersion:  v1
	I1107 23:22:09.230183  103523 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1107 23:22:09.230249  103523 ssh_runner.go:195] Run: crio --version
	I1107 23:22:09.261277  103523 command_runner.go:130] > crio version 1.24.6
	I1107 23:22:09.261304  103523 command_runner.go:130] > Version:          1.24.6
	I1107 23:22:09.261323  103523 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1107 23:22:09.261342  103523 command_runner.go:130] > GitTreeState:     clean
	I1107 23:22:09.261351  103523 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1107 23:22:09.261356  103523 command_runner.go:130] > GoVersion:        go1.18.2
	I1107 23:22:09.261362  103523 command_runner.go:130] > Compiler:         gc
	I1107 23:22:09.261372  103523 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:22:09.261382  103523 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:22:09.261399  103523 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:22:09.261409  103523 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:22:09.261420  103523 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:22:09.262558  103523 ssh_runner.go:195] Run: crio --version
	I1107 23:22:09.295493  103523 command_runner.go:130] > crio version 1.24.6
	I1107 23:22:09.295513  103523 command_runner.go:130] > Version:          1.24.6
	I1107 23:22:09.295520  103523 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1107 23:22:09.295525  103523 command_runner.go:130] > GitTreeState:     clean
	I1107 23:22:09.295541  103523 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1107 23:22:09.295549  103523 command_runner.go:130] > GoVersion:        go1.18.2
	I1107 23:22:09.295555  103523 command_runner.go:130] > Compiler:         gc
	I1107 23:22:09.295565  103523 command_runner.go:130] > Platform:         linux/amd64
	I1107 23:22:09.295576  103523 command_runner.go:130] > Linkmode:         dynamic
	I1107 23:22:09.295592  103523 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1107 23:22:09.295602  103523 command_runner.go:130] > SeccompEnabled:   true
	I1107 23:22:09.295609  103523 command_runner.go:130] > AppArmorEnabled:  false
	I1107 23:22:09.300099  103523 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1107 23:22:09.301913  103523 out.go:177]   - env NO_PROXY=192.168.58.2
	I1107 23:22:09.303533  103523 cli_runner.go:164] Run: docker network inspect multinode-542158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 23:22:09.320570  103523 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1107 23:22:09.324147  103523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 23:22:09.334209  103523 certs.go:56] Setting up /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158 for IP: 192.168.58.3
	I1107 23:22:09.334246  103523 certs.go:190] acquiring lock for shared ca certs: {Name:mkbe2c97e30f744ec2581d086567acaa8822f7ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:22:09.334394  103523 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key
	I1107 23:22:09.334430  103523 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key
	I1107 23:22:09.334443  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 23:22:09.334460  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 23:22:09.334472  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 23:22:09.334483  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 23:22:09.334539  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211.pem (1338 bytes)
	W1107 23:22:09.334578  103523 certs.go:433] ignoring /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211_empty.pem, impossibly tiny 0 bytes
	I1107 23:22:09.334593  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 23:22:09.334621  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem (1078 bytes)
	I1107 23:22:09.334649  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem (1123 bytes)
	I1107 23:22:09.334679  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem (1675 bytes)
	I1107 23:22:09.334718  103523 certs.go:437] found cert: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem (1708 bytes)
	I1107 23:22:09.334743  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> /usr/share/ca-certificates/162112.pem
	I1107 23:22:09.334756  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:22:09.334768  103523 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211.pem -> /usr/share/ca-certificates/16211.pem
	I1107 23:22:09.335089  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 23:22:09.356655  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1107 23:22:09.377999  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 23:22:09.399058  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1107 23:22:09.420950  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem --> /usr/share/ca-certificates/162112.pem (1708 bytes)
	I1107 23:22:09.443081  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 23:22:09.465388  103523 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/certs/16211.pem --> /usr/share/ca-certificates/16211.pem (1338 bytes)
	I1107 23:22:09.487328  103523 ssh_runner.go:195] Run: openssl version
	I1107 23:22:09.492166  103523 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1107 23:22:09.492250  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 23:22:09.500518  103523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:22:09.503793  103523 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:22:09.503832  103523 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:22:09.503872  103523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 23:22:09.509964  103523 command_runner.go:130] > b5213941
	I1107 23:22:09.510160  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 23:22:09.518696  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16211.pem && ln -fs /usr/share/ca-certificates/16211.pem /etc/ssl/certs/16211.pem"
	I1107 23:22:09.527484  103523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16211.pem
	I1107 23:22:09.530767  103523 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  7 23:08 /usr/share/ca-certificates/16211.pem
	I1107 23:22:09.530798  103523 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:08 /usr/share/ca-certificates/16211.pem
	I1107 23:22:09.530831  103523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16211.pem
	I1107 23:22:09.536984  103523 command_runner.go:130] > 51391683
	I1107 23:22:09.537106  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16211.pem /etc/ssl/certs/51391683.0"
	I1107 23:22:09.545916  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162112.pem && ln -fs /usr/share/ca-certificates/162112.pem /etc/ssl/certs/162112.pem"
	I1107 23:22:09.554474  103523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162112.pem
	I1107 23:22:09.557561  103523 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  7 23:08 /usr/share/ca-certificates/162112.pem
	I1107 23:22:09.557596  103523 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:08 /usr/share/ca-certificates/162112.pem
	I1107 23:22:09.557663  103523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162112.pem
	I1107 23:22:09.563832  103523 command_runner.go:130] > 3ec20f2e
	I1107 23:22:09.563917  103523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162112.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 23:22:09.572680  103523 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 23:22:09.575654  103523 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:22:09.575695  103523 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 23:22:09.575817  103523 ssh_runner.go:195] Run: crio config
	I1107 23:22:09.613037  103523 command_runner.go:130] ! time="2023-11-07 23:22:09.612566449Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1107 23:22:09.613070  103523 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1107 23:22:09.618213  103523 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1107 23:22:09.618239  103523 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1107 23:22:09.618246  103523 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1107 23:22:09.618252  103523 command_runner.go:130] > #
	I1107 23:22:09.618263  103523 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1107 23:22:09.618274  103523 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1107 23:22:09.618284  103523 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1107 23:22:09.618293  103523 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1107 23:22:09.618300  103523 command_runner.go:130] > # reload'.
	I1107 23:22:09.618306  103523 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1107 23:22:09.618315  103523 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1107 23:22:09.618330  103523 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1107 23:22:09.618341  103523 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1107 23:22:09.618350  103523 command_runner.go:130] > [crio]
	I1107 23:22:09.618364  103523 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1107 23:22:09.618376  103523 command_runner.go:130] > # containers images, in this directory.
	I1107 23:22:09.618395  103523 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1107 23:22:09.618405  103523 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1107 23:22:09.618410  103523 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1107 23:22:09.618419  103523 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1107 23:22:09.618429  103523 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1107 23:22:09.618440  103523 command_runner.go:130] > # storage_driver = "vfs"
	I1107 23:22:09.618454  103523 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1107 23:22:09.618467  103523 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1107 23:22:09.618478  103523 command_runner.go:130] > # storage_option = [
	I1107 23:22:09.618484  103523 command_runner.go:130] > # ]
	I1107 23:22:09.618496  103523 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1107 23:22:09.618506  103523 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1107 23:22:09.618511  103523 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1107 23:22:09.618526  103523 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1107 23:22:09.618540  103523 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1107 23:22:09.618551  103523 command_runner.go:130] > # always happen on a node reboot
	I1107 23:22:09.618562  103523 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1107 23:22:09.618575  103523 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1107 23:22:09.618593  103523 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1107 23:22:09.618613  103523 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1107 23:22:09.618626  103523 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1107 23:22:09.618642  103523 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1107 23:22:09.618658  103523 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1107 23:22:09.618668  103523 command_runner.go:130] > # internal_wipe = true
	I1107 23:22:09.618680  103523 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1107 23:22:09.618688  103523 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1107 23:22:09.618697  103523 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1107 23:22:09.618710  103523 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1107 23:22:09.618749  103523 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1107 23:22:09.618756  103523 command_runner.go:130] > [crio.api]
	I1107 23:22:09.618767  103523 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1107 23:22:09.618780  103523 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1107 23:22:09.618792  103523 command_runner.go:130] > # IP address on which the stream server will listen.
	I1107 23:22:09.618803  103523 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1107 23:22:09.618817  103523 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1107 23:22:09.618826  103523 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1107 23:22:09.618833  103523 command_runner.go:130] > # stream_port = "0"
	I1107 23:22:09.618842  103523 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1107 23:22:09.618852  103523 command_runner.go:130] > # stream_enable_tls = false
	I1107 23:22:09.618861  103523 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1107 23:22:09.618871  103523 command_runner.go:130] > # stream_idle_timeout = ""
	I1107 23:22:09.618885  103523 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1107 23:22:09.618899  103523 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1107 23:22:09.618908  103523 command_runner.go:130] > # minutes.
	I1107 23:22:09.618918  103523 command_runner.go:130] > # stream_tls_cert = ""
	I1107 23:22:09.618932  103523 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1107 23:22:09.618943  103523 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1107 23:22:09.618951  103523 command_runner.go:130] > # stream_tls_key = ""
	I1107 23:22:09.618961  103523 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1107 23:22:09.618979  103523 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1107 23:22:09.618991  103523 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1107 23:22:09.619001  103523 command_runner.go:130] > # stream_tls_ca = ""
	I1107 23:22:09.619016  103523 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:22:09.619026  103523 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1107 23:22:09.619038  103523 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1107 23:22:09.619048  103523 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1107 23:22:09.619086  103523 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1107 23:22:09.619099  103523 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1107 23:22:09.619108  103523 command_runner.go:130] > [crio.runtime]
	I1107 23:22:09.619118  103523 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1107 23:22:09.619129  103523 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1107 23:22:09.619139  103523 command_runner.go:130] > # "nofile=1024:2048"
	I1107 23:22:09.619152  103523 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1107 23:22:09.619163  103523 command_runner.go:130] > # default_ulimits = [
	I1107 23:22:09.619172  103523 command_runner.go:130] > # ]
	I1107 23:22:09.619185  103523 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1107 23:22:09.619195  103523 command_runner.go:130] > # no_pivot = false
	I1107 23:22:09.619208  103523 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1107 23:22:09.619221  103523 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1107 23:22:09.619233  103523 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1107 23:22:09.619245  103523 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1107 23:22:09.619256  103523 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1107 23:22:09.619270  103523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:22:09.619280  103523 command_runner.go:130] > # conmon = ""
	I1107 23:22:09.619289  103523 command_runner.go:130] > # Cgroup setting for conmon
	I1107 23:22:09.619299  103523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1107 23:22:09.619310  103523 command_runner.go:130] > conmon_cgroup = "pod"
	I1107 23:22:09.619324  103523 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1107 23:22:09.619335  103523 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1107 23:22:09.619349  103523 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1107 23:22:09.619359  103523 command_runner.go:130] > # conmon_env = [
	I1107 23:22:09.619366  103523 command_runner.go:130] > # ]
	I1107 23:22:09.619381  103523 command_runner.go:130] > # Additional environment variables to set for all the
	I1107 23:22:09.619392  103523 command_runner.go:130] > # containers. These are overridden if set in the
	I1107 23:22:09.619405  103523 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1107 23:22:09.619418  103523 command_runner.go:130] > # default_env = [
	I1107 23:22:09.619427  103523 command_runner.go:130] > # ]
	I1107 23:22:09.619440  103523 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1107 23:22:09.619450  103523 command_runner.go:130] > # selinux = false
	I1107 23:22:09.619460  103523 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1107 23:22:09.619469  103523 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1107 23:22:09.619481  103523 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1107 23:22:09.619492  103523 command_runner.go:130] > # seccomp_profile = ""
	I1107 23:22:09.619505  103523 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1107 23:22:09.619518  103523 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1107 23:22:09.619531  103523 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1107 23:22:09.619542  103523 command_runner.go:130] > # which might increase security.
	I1107 23:22:09.619550  103523 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1107 23:22:09.619562  103523 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1107 23:22:09.619581  103523 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1107 23:22:09.619595  103523 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1107 23:22:09.619609  103523 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1107 23:22:09.619620  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:22:09.619634  103523 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1107 23:22:09.619643  103523 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1107 23:22:09.619653  103523 command_runner.go:130] > # the cgroup blockio controller.
	I1107 23:22:09.619664  103523 command_runner.go:130] > # blockio_config_file = ""
	I1107 23:22:09.619678  103523 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1107 23:22:09.619688  103523 command_runner.go:130] > # irqbalance daemon.
	I1107 23:22:09.619701  103523 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1107 23:22:09.619719  103523 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1107 23:22:09.619727  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:22:09.619737  103523 command_runner.go:130] > # rdt_config_file = ""
	I1107 23:22:09.619750  103523 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1107 23:22:09.619783  103523 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1107 23:22:09.619795  103523 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1107 23:22:09.619803  103523 command_runner.go:130] > # separate_pull_cgroup = ""
	I1107 23:22:09.619818  103523 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1107 23:22:09.619829  103523 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1107 23:22:09.619836  103523 command_runner.go:130] > # will be added.
	I1107 23:22:09.619843  103523 command_runner.go:130] > # default_capabilities = [
	I1107 23:22:09.619857  103523 command_runner.go:130] > # 	"CHOWN",
	I1107 23:22:09.619872  103523 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1107 23:22:09.619879  103523 command_runner.go:130] > # 	"FSETID",
	I1107 23:22:09.619885  103523 command_runner.go:130] > # 	"FOWNER",
	I1107 23:22:09.619893  103523 command_runner.go:130] > # 	"SETGID",
	I1107 23:22:09.619897  103523 command_runner.go:130] > # 	"SETUID",
	I1107 23:22:09.619923  103523 command_runner.go:130] > # 	"SETPCAP",
	I1107 23:22:09.619940  103523 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1107 23:22:09.619946  103523 command_runner.go:130] > # 	"KILL",
	I1107 23:22:09.619953  103523 command_runner.go:130] > # ]
	I1107 23:22:09.619966  103523 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1107 23:22:09.619978  103523 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1107 23:22:09.619986  103523 command_runner.go:130] > # add_inheritable_capabilities = true
	I1107 23:22:09.619996  103523 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1107 23:22:09.620010  103523 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:22:09.620021  103523 command_runner.go:130] > # default_sysctls = [
	I1107 23:22:09.620027  103523 command_runner.go:130] > # ]
	I1107 23:22:09.620035  103523 command_runner.go:130] > # List of devices on the host that a
	I1107 23:22:09.620052  103523 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1107 23:22:09.620062  103523 command_runner.go:130] > # allowed_devices = [
	I1107 23:22:09.620067  103523 command_runner.go:130] > # 	"/dev/fuse",
	I1107 23:22:09.620073  103523 command_runner.go:130] > # ]
	I1107 23:22:09.620081  103523 command_runner.go:130] > # List of additional devices, specified as
	I1107 23:22:09.620128  103523 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1107 23:22:09.620142  103523 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1107 23:22:09.620151  103523 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1107 23:22:09.620156  103523 command_runner.go:130] > # additional_devices = [
	I1107 23:22:09.620159  103523 command_runner.go:130] > # ]
	I1107 23:22:09.620167  103523 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1107 23:22:09.620178  103523 command_runner.go:130] > # cdi_spec_dirs = [
	I1107 23:22:09.620185  103523 command_runner.go:130] > # 	"/etc/cdi",
	I1107 23:22:09.620195  103523 command_runner.go:130] > # 	"/var/run/cdi",
	I1107 23:22:09.620201  103523 command_runner.go:130] > # ]
	I1107 23:22:09.620214  103523 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1107 23:22:09.620227  103523 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1107 23:22:09.620237  103523 command_runner.go:130] > # Defaults to false.
	I1107 23:22:09.620245  103523 command_runner.go:130] > # device_ownership_from_security_context = false
	I1107 23:22:09.620258  103523 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1107 23:22:09.620270  103523 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1107 23:22:09.620280  103523 command_runner.go:130] > # hooks_dir = [
	I1107 23:22:09.620288  103523 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1107 23:22:09.620297  103523 command_runner.go:130] > # ]
	I1107 23:22:09.620307  103523 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1107 23:22:09.620321  103523 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1107 23:22:09.620330  103523 command_runner.go:130] > # its default mounts from the following two files:
	I1107 23:22:09.620333  103523 command_runner.go:130] > #
	I1107 23:22:09.620346  103523 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1107 23:22:09.620360  103523 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1107 23:22:09.620370  103523 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1107 23:22:09.620379  103523 command_runner.go:130] > #
	I1107 23:22:09.620390  103523 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1107 23:22:09.620403  103523 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1107 23:22:09.620414  103523 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1107 23:22:09.620420  103523 command_runner.go:130] > #      only add mounts it finds in this file.
	I1107 23:22:09.620431  103523 command_runner.go:130] > #
	I1107 23:22:09.620443  103523 command_runner.go:130] > # default_mounts_file = ""
	I1107 23:22:09.620452  103523 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1107 23:22:09.620467  103523 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1107 23:22:09.620474  103523 command_runner.go:130] > # pids_limit = 0
	I1107 23:22:09.620488  103523 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1107 23:22:09.620499  103523 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1107 23:22:09.620506  103523 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1107 23:22:09.620522  103523 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1107 23:22:09.620532  103523 command_runner.go:130] > # log_size_max = -1
	I1107 23:22:09.620544  103523 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1107 23:22:09.620554  103523 command_runner.go:130] > # log_to_journald = false
	I1107 23:22:09.620564  103523 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1107 23:22:09.620576  103523 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1107 23:22:09.620585  103523 command_runner.go:130] > # Path to directory for container attach sockets.
	I1107 23:22:09.620593  103523 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1107 23:22:09.620601  103523 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1107 23:22:09.620612  103523 command_runner.go:130] > # bind_mount_prefix = ""
	I1107 23:22:09.620625  103523 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1107 23:22:09.620635  103523 command_runner.go:130] > # read_only = false
	I1107 23:22:09.620646  103523 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1107 23:22:09.620659  103523 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1107 23:22:09.620669  103523 command_runner.go:130] > # live configuration reload.
	I1107 23:22:09.620674  103523 command_runner.go:130] > # log_level = "info"
	I1107 23:22:09.620683  103523 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1107 23:22:09.620692  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:22:09.620702  103523 command_runner.go:130] > # log_filter = ""
	I1107 23:22:09.620718  103523 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1107 23:22:09.620731  103523 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1107 23:22:09.620742  103523 command_runner.go:130] > # separated by comma.
	I1107 23:22:09.620749  103523 command_runner.go:130] > # uid_mappings = ""
	I1107 23:22:09.620760  103523 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1107 23:22:09.620766  103523 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1107 23:22:09.620775  103523 command_runner.go:130] > # separated by comma.
	I1107 23:22:09.620783  103523 command_runner.go:130] > # gid_mappings = ""
	I1107 23:22:09.620797  103523 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1107 23:22:09.620824  103523 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:22:09.620837  103523 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:22:09.620846  103523 command_runner.go:130] > # minimum_mappable_uid = -1
	I1107 23:22:09.620853  103523 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1107 23:22:09.620866  103523 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1107 23:22:09.620877  103523 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1107 23:22:09.620899  103523 command_runner.go:130] > # minimum_mappable_gid = -1
	I1107 23:22:09.620912  103523 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1107 23:22:09.620925  103523 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1107 23:22:09.620935  103523 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1107 23:22:09.620939  103523 command_runner.go:130] > # ctr_stop_timeout = 30
	I1107 23:22:09.620951  103523 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1107 23:22:09.620970  103523 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1107 23:22:09.620982  103523 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1107 23:22:09.620991  103523 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1107 23:22:09.621001  103523 command_runner.go:130] > # drop_infra_ctr = true
	I1107 23:22:09.621011  103523 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1107 23:22:09.621022  103523 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1107 23:22:09.621033  103523 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1107 23:22:09.621044  103523 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1107 23:22:09.621055  103523 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1107 23:22:09.621067  103523 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1107 23:22:09.621075  103523 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1107 23:22:09.621093  103523 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1107 23:22:09.621103  103523 command_runner.go:130] > # pinns_path = ""
	I1107 23:22:09.621110  103523 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1107 23:22:09.621122  103523 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1107 23:22:09.621135  103523 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1107 23:22:09.621146  103523 command_runner.go:130] > # default_runtime = "runc"
	I1107 23:22:09.621158  103523 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1107 23:22:09.621176  103523 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1107 23:22:09.621193  103523 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1107 23:22:09.621201  103523 command_runner.go:130] > # creation as a file is not desired either.
	I1107 23:22:09.621214  103523 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1107 23:22:09.621227  103523 command_runner.go:130] > # the hostname is being managed dynamically.
	I1107 23:22:09.621239  103523 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1107 23:22:09.621249  103523 command_runner.go:130] > # ]
	I1107 23:22:09.621262  103523 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1107 23:22:09.621275  103523 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1107 23:22:09.621284  103523 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1107 23:22:09.621294  103523 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1107 23:22:09.621299  103523 command_runner.go:130] > #
	I1107 23:22:09.621311  103523 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1107 23:22:09.621322  103523 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1107 23:22:09.621349  103523 command_runner.go:130] > #  runtime_type = "oci"
	I1107 23:22:09.621361  103523 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1107 23:22:09.621371  103523 command_runner.go:130] > #  privileged_without_host_devices = false
	I1107 23:22:09.621375  103523 command_runner.go:130] > #  allowed_annotations = []
	I1107 23:22:09.621380  103523 command_runner.go:130] > # Where:
	I1107 23:22:09.621392  103523 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1107 23:22:09.621406  103523 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1107 23:22:09.621423  103523 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1107 23:22:09.621436  103523 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1107 23:22:09.621446  103523 command_runner.go:130] > #   in $PATH.
	I1107 23:22:09.621458  103523 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1107 23:22:09.621467  103523 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1107 23:22:09.621477  103523 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1107 23:22:09.621487  103523 command_runner.go:130] > #   state.
	I1107 23:22:09.621498  103523 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1107 23:22:09.621511  103523 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1107 23:22:09.621523  103523 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1107 23:22:09.621540  103523 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1107 23:22:09.621550  103523 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1107 23:22:09.621559  103523 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1107 23:22:09.621571  103523 command_runner.go:130] > #   The currently recognized values are:
	I1107 23:22:09.621582  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1107 23:22:09.621597  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1107 23:22:09.621611  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1107 23:22:09.621622  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1107 23:22:09.621634  103523 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1107 23:22:09.621647  103523 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1107 23:22:09.621658  103523 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1107 23:22:09.621680  103523 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1107 23:22:09.621692  103523 command_runner.go:130] > #   should be moved to the container's cgroup
	I1107 23:22:09.621699  103523 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1107 23:22:09.621716  103523 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1107 23:22:09.621725  103523 command_runner.go:130] > runtime_type = "oci"
	I1107 23:22:09.621729  103523 command_runner.go:130] > runtime_root = "/run/runc"
	I1107 23:22:09.621738  103523 command_runner.go:130] > runtime_config_path = ""
	I1107 23:22:09.621749  103523 command_runner.go:130] > monitor_path = ""
	I1107 23:22:09.621757  103523 command_runner.go:130] > monitor_cgroup = ""
	I1107 23:22:09.621767  103523 command_runner.go:130] > monitor_exec_cgroup = ""
	I1107 23:22:09.621839  103523 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1107 23:22:09.621851  103523 command_runner.go:130] > # running containers
	I1107 23:22:09.621858  103523 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1107 23:22:09.621869  103523 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1107 23:22:09.621883  103523 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1107 23:22:09.621896  103523 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1107 23:22:09.621904  103523 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1107 23:22:09.621912  103523 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1107 23:22:09.621926  103523 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1107 23:22:09.621937  103523 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1107 23:22:09.621948  103523 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1107 23:22:09.621956  103523 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1107 23:22:09.621970  103523 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1107 23:22:09.621979  103523 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1107 23:22:09.621989  103523 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1107 23:22:09.622000  103523 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1107 23:22:09.622016  103523 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1107 23:22:09.622029  103523 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1107 23:22:09.622047  103523 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1107 23:22:09.622063  103523 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1107 23:22:09.622072  103523 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1107 23:22:09.622081  103523 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1107 23:22:09.622087  103523 command_runner.go:130] > # Example:
	I1107 23:22:09.622099  103523 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1107 23:22:09.622110  103523 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1107 23:22:09.622122  103523 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1107 23:22:09.622137  103523 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1107 23:22:09.622147  103523 command_runner.go:130] > # cpuset = 0
	I1107 23:22:09.622153  103523 command_runner.go:130] > # cpushares = "0-1"
	I1107 23:22:09.622160  103523 command_runner.go:130] > # Where:
	I1107 23:22:09.622166  103523 command_runner.go:130] > # The workload name is workload-type.
	I1107 23:22:09.622180  103523 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1107 23:22:09.622194  103523 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1107 23:22:09.622203  103523 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1107 23:22:09.622219  103523 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1107 23:22:09.622232  103523 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1107 23:22:09.622240  103523 command_runner.go:130] > # 
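	The workload matching rule quoted above (key-only activation annotation, then per-container JSON overrides under the annotation prefix) can be sketched as follows. This is an illustrative re-implementation of the documented rule, not CRI-O's actual code; `resolve_workload_resources` and its arguments are names invented here.

```python
import json

# Sketch of the workloads-table rule from the CRI-O config comments above.
def resolve_workload_resources(pod_annotations, workload, container_name):
    """Return the resource values a container would get, or None if the
    pod does not opt into the workload."""
    # Activation is a precise string match on the annotation KEY; the value
    # is ignored.
    if workload["activation_annotation"] not in pod_annotations:
        return None
    # Start from the workload's default resource values.
    resources = dict(workload["default_resources"])
    # Per-container override: "<annotation_prefix>/<container_name>" -> JSON.
    override_key = f'{workload["annotation_prefix"]}/{container_name}'
    raw = pod_annotations.get(override_key)
    if raw is not None:
        resources.update(json.loads(raw))
    return resources

workload = {
    "activation_annotation": "io.crio/workload",
    "annotation_prefix": "io.crio.workload-type",
    "default_resources": {"cpushares": "0-1", "cpuset": "0"},
}
annotations = {
    "io.crio/workload": "",  # key-only opt-in, value ignored
    "io.crio.workload-type/app": '{"cpushares": "2"}',
}
print(resolve_workload_resources(annotations, workload, "app"))
# → {'cpushares': '2', 'cpuset': '0'}
```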
	I1107 23:22:09.622246  103523 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1107 23:22:09.622252  103523 command_runner.go:130] > #
	I1107 23:22:09.622268  103523 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1107 23:22:09.622283  103523 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1107 23:22:09.622298  103523 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1107 23:22:09.622312  103523 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1107 23:22:09.622323  103523 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1107 23:22:09.622336  103523 command_runner.go:130] > [crio.image]
	I1107 23:22:09.622348  103523 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1107 23:22:09.622359  103523 command_runner.go:130] > # default_transport = "docker://"
	I1107 23:22:09.622373  103523 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1107 23:22:09.622387  103523 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:22:09.622394  103523 command_runner.go:130] > # global_auth_file = ""
	I1107 23:22:09.622406  103523 command_runner.go:130] > # The image used to instantiate infra containers.
	I1107 23:22:09.622414  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:22:09.622423  103523 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1107 23:22:09.622430  103523 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1107 23:22:09.622443  103523 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1107 23:22:09.622455  103523 command_runner.go:130] > # This option supports live configuration reload.
	I1107 23:22:09.622463  103523 command_runner.go:130] > # pause_image_auth_file = ""
	I1107 23:22:09.622475  103523 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1107 23:22:09.622486  103523 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1107 23:22:09.622499  103523 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1107 23:22:09.622509  103523 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1107 23:22:09.622515  103523 command_runner.go:130] > # pause_command = "/pause"
	I1107 23:22:09.622526  103523 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1107 23:22:09.622535  103523 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1107 23:22:09.622546  103523 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1107 23:22:09.622560  103523 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1107 23:22:09.622573  103523 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1107 23:22:09.622580  103523 command_runner.go:130] > # signature_policy = ""
	I1107 23:22:09.622599  103523 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1107 23:22:09.622612  103523 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1107 23:22:09.622616  103523 command_runner.go:130] > # changing them here.
	I1107 23:22:09.622620  103523 command_runner.go:130] > # insecure_registries = [
	I1107 23:22:09.622624  103523 command_runner.go:130] > # ]
	I1107 23:22:09.622630  103523 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1107 23:22:09.622636  103523 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1107 23:22:09.622646  103523 command_runner.go:130] > # image_volumes = "mkdir"
	I1107 23:22:09.622653  103523 command_runner.go:130] > # Temporary directory to use for storing big files
	I1107 23:22:09.622658  103523 command_runner.go:130] > # big_files_temporary_dir = ""
	I1107 23:22:09.622666  103523 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1107 23:22:09.622674  103523 command_runner.go:130] > # CNI plugins.
	I1107 23:22:09.622683  103523 command_runner.go:130] > [crio.network]
	I1107 23:22:09.622693  103523 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1107 23:22:09.622707  103523 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1107 23:22:09.622722  103523 command_runner.go:130] > # cni_default_network = ""
	I1107 23:22:09.622735  103523 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1107 23:22:09.622746  103523 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1107 23:22:09.622759  103523 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1107 23:22:09.622765  103523 command_runner.go:130] > # plugin_dirs = [
	I1107 23:22:09.622770  103523 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1107 23:22:09.622776  103523 command_runner.go:130] > # ]
	I1107 23:22:09.622786  103523 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1107 23:22:09.622790  103523 command_runner.go:130] > [crio.metrics]
	I1107 23:22:09.622797  103523 command_runner.go:130] > # Globally enable or disable metrics support.
	I1107 23:22:09.622802  103523 command_runner.go:130] > # enable_metrics = false
	I1107 23:22:09.622809  103523 command_runner.go:130] > # Specify enabled metrics collectors.
	I1107 23:22:09.622814  103523 command_runner.go:130] > # Per default all metrics are enabled.
	I1107 23:22:09.622823  103523 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1107 23:22:09.622829  103523 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1107 23:22:09.622838  103523 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1107 23:22:09.622845  103523 command_runner.go:130] > # metrics_collectors = [
	I1107 23:22:09.622849  103523 command_runner.go:130] > # 	"operations",
	I1107 23:22:09.622854  103523 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1107 23:22:09.622861  103523 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1107 23:22:09.622865  103523 command_runner.go:130] > # 	"operations_errors",
	I1107 23:22:09.622872  103523 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1107 23:22:09.622876  103523 command_runner.go:130] > # 	"image_pulls_by_name",
	I1107 23:22:09.622883  103523 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1107 23:22:09.622887  103523 command_runner.go:130] > # 	"image_pulls_failures",
	I1107 23:22:09.622891  103523 command_runner.go:130] > # 	"image_pulls_successes",
	I1107 23:22:09.622896  103523 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1107 23:22:09.622903  103523 command_runner.go:130] > # 	"image_layer_reuse",
	I1107 23:22:09.622907  103523 command_runner.go:130] > # 	"containers_oom_total",
	I1107 23:22:09.622917  103523 command_runner.go:130] > # 	"containers_oom",
	I1107 23:22:09.622924  103523 command_runner.go:130] > # 	"processes_defunct",
	I1107 23:22:09.622935  103523 command_runner.go:130] > # 	"operations_total",
	I1107 23:22:09.622941  103523 command_runner.go:130] > # 	"operations_latency_seconds",
	I1107 23:22:09.622952  103523 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1107 23:22:09.622956  103523 command_runner.go:130] > # 	"operations_errors_total",
	I1107 23:22:09.622963  103523 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1107 23:22:09.622968  103523 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1107 23:22:09.622972  103523 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1107 23:22:09.622979  103523 command_runner.go:130] > # 	"image_pulls_success_total",
	I1107 23:22:09.622984  103523 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1107 23:22:09.622991  103523 command_runner.go:130] > # 	"containers_oom_count_total",
	I1107 23:22:09.622994  103523 command_runner.go:130] > # ]
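	The prefix equivalence described above ("operations" treated the same as "crio_operations" and "container_runtime_crio_operations") amounts to stripping the two known prefixes. A minimal illustrative normalizer, not CRI-O's implementation (`normalize_collector` is a name invented here):

```python
# Strip the optional "container_runtime_" and "crio_" prefixes so that all
# spellings of a collector name resolve to the same canonical collector.
def normalize_collector(name):
    for prefix in ("container_runtime_", "crio_"):
        if name.startswith(prefix):
            name = name[len(prefix):]
    return name

for n in ("operations", "crio_operations", "container_runtime_crio_operations"):
    print(normalize_collector(n))  # → operations (all three)
```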
	I1107 23:22:09.623000  103523 command_runner.go:130] > # The port on which the metrics server will listen.
	I1107 23:22:09.623006  103523 command_runner.go:130] > # metrics_port = 9090
	I1107 23:22:09.623011  103523 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1107 23:22:09.623015  103523 command_runner.go:130] > # metrics_socket = ""
	I1107 23:22:09.623021  103523 command_runner.go:130] > # The certificate for the secure metrics server.
	I1107 23:22:09.623029  103523 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1107 23:22:09.623035  103523 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1107 23:22:09.623042  103523 command_runner.go:130] > # certificate on any modification event.
	I1107 23:22:09.623046  103523 command_runner.go:130] > # metrics_cert = ""
	I1107 23:22:09.623055  103523 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1107 23:22:09.623062  103523 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1107 23:22:09.623066  103523 command_runner.go:130] > # metrics_key = ""
	I1107 23:22:09.623071  103523 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1107 23:22:09.623078  103523 command_runner.go:130] > [crio.tracing]
	I1107 23:22:09.623083  103523 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1107 23:22:09.623087  103523 command_runner.go:130] > # enable_tracing = false
	I1107 23:22:09.623092  103523 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1107 23:22:09.623097  103523 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1107 23:22:09.623102  103523 command_runner.go:130] > # Number of samples to collect per million spans.
	I1107 23:22:09.623109  103523 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1107 23:22:09.623115  103523 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1107 23:22:09.623122  103523 command_runner.go:130] > [crio.stats]
	I1107 23:22:09.623127  103523 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1107 23:22:09.623133  103523 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1107 23:22:09.623137  103523 command_runner.go:130] > # stats_collection_period = 0
	I1107 23:22:09.623257  103523 cni.go:84] Creating CNI manager for ""
	I1107 23:22:09.623270  103523 cni.go:136] 2 nodes found, recommending kindnet
	I1107 23:22:09.623281  103523 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 23:22:09.623306  103523 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-542158 NodeName:multinode-542158-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1107 23:22:09.623438  103523 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-542158-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 23:22:09.623490  103523 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-542158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-542158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 23:22:09.623542  103523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1107 23:22:09.631540  103523 command_runner.go:130] > kubeadm
	I1107 23:22:09.631558  103523 command_runner.go:130] > kubectl
	I1107 23:22:09.631562  103523 command_runner.go:130] > kubelet
	I1107 23:22:09.632179  103523 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 23:22:09.632246  103523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1107 23:22:09.639871  103523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1107 23:22:09.655573  103523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1107 23:22:09.671796  103523 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1107 23:22:09.675016  103523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
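	The bash one-liner above updates /etc/hosts idempotently: it drops any existing line for control-plane.minikube.internal, then appends the current mapping. The same transform as a pure-Python sketch (illustrative only; `upsert_hosts_entry` is a name invented here):

```python
# Drop any line ending in "\t<hostname>" (mirroring the grep -v in the log),
# then append a fresh "ip\thostname" mapping, so repeated runs converge.
def upsert_hosts_entry(hosts_text, ip, hostname="control-plane.minikube.internal"):
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + hostname)]
    kept.append(f"{ip}\t{hostname}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n"
print(upsert_hosts_entry(before, "192.168.58.2"))
```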
	I1107 23:22:09.684884  103523 host.go:66] Checking if "multinode-542158" exists ...
	I1107 23:22:09.685080  103523 config.go:182] Loaded profile config "multinode-542158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:22:09.685098  103523 start.go:304] JoinCluster: &{Name:multinode-542158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-542158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:22:09.685180  103523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1107 23:22:09.685227  103523 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:22:09.701794  103523 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa Username:docker}
	I1107 23:22:09.845825  103523 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fsw7c0.76mwpk35s7fexnef --discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 
	I1107 23:22:09.845881  103523 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:22:09.845912  103523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fsw7c0.76mwpk35s7fexnef --discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-542158-m02"
	I1107 23:22:09.880225  103523 command_runner.go:130] > [preflight] Running pre-flight checks
	I1107 23:22:09.909263  103523 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1107 23:22:09.909285  103523 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1046-gcp
	I1107 23:22:09.909292  103523 command_runner.go:130] > OS: Linux
	I1107 23:22:09.909300  103523 command_runner.go:130] > CGROUPS_CPU: enabled
	I1107 23:22:09.909316  103523 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1107 23:22:09.909324  103523 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1107 23:22:09.909336  103523 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1107 23:22:09.909345  103523 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1107 23:22:09.909357  103523 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1107 23:22:09.909370  103523 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1107 23:22:09.909381  103523 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1107 23:22:09.909392  103523 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1107 23:22:09.991723  103523 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1107 23:22:09.991775  103523 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1107 23:22:10.018669  103523 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 23:22:10.018702  103523 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 23:22:10.018713  103523 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1107 23:22:10.094692  103523 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1107 23:22:12.609785  103523 command_runner.go:130] > This node has joined the cluster:
	I1107 23:22:12.609814  103523 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1107 23:22:12.609824  103523 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1107 23:22:12.609834  103523 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1107 23:22:12.612490  103523 command_runner.go:130] ! W1107 23:22:09.879798    1104 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1107 23:22:12.612515  103523 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1046-gcp\n", err: exit status 1
	I1107 23:22:12.612528  103523 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 23:22:12.612546  103523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fsw7c0.76mwpk35s7fexnef --discovery-token-ca-cert-hash sha256:8a705d15a45dc72008d892e2cff618f0dc1a1c4f33be51e629a7afa6d45ac282 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-542158-m02": (2.76662014s)
	I1107 23:22:12.612575  103523 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1107 23:22:12.777619  103523 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1107 23:22:12.777655  103523 start.go:306] JoinCluster complete in 3.092554611s
	I1107 23:22:12.777668  103523 cni.go:84] Creating CNI manager for ""
	I1107 23:22:12.777675  103523 cni.go:136] 2 nodes found, recommending kindnet
	I1107 23:22:12.777731  103523 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1107 23:22:12.781110  103523 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1107 23:22:12.781133  103523 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I1107 23:22:12.781140  103523 command_runner.go:130] > Device: 37h/55d	Inode: 560775      Links: 1
	I1107 23:22:12.781146  103523 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1107 23:22:12.781152  103523 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1107 23:22:12.781158  103523 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1107 23:22:12.781170  103523 command_runner.go:130] > Change: 2023-11-07 23:01:52.156027715 +0000
	I1107 23:22:12.781176  103523 command_runner.go:130] >  Birth: 2023-11-07 23:01:52.132026002 +0000
	I1107 23:22:12.781215  103523 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1107 23:22:12.781225  103523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1107 23:22:12.798100  103523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1107 23:22:13.020508  103523 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:22:13.020533  103523 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1107 23:22:13.020540  103523 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1107 23:22:13.020546  103523 command_runner.go:130] > daemonset.apps/kindnet configured
	I1107 23:22:13.020908  103523 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:22:13.021137  103523 kapi.go:59] client config for multinode-542158: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:22:13.021437  103523 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1107 23:22:13.021451  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:13.021458  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:13.021464  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:13.023691  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:13.023711  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:13.023718  103523 round_trippers.go:580]     Audit-Id: 8dc9c097-7d3f-4546-a8ab-7ed127ec8a28
	I1107 23:22:13.023724  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:13.023729  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:13.023734  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:13.023740  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:13.023752  103523 round_trippers.go:580]     Content-Length: 291
	I1107 23:22:13.023786  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:13 GMT
	I1107 23:22:13.023821  103523 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"eb618563-1594-48e0-bbf3-afdea9801507","resourceVersion":"393","creationTimestamp":"2023-11-07T23:21:41Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1107 23:22:13.023901  103523 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-542158" context rescaled to 1 replicas
	I1107 23:22:13.023928  103523 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1107 23:22:13.026629  103523 out.go:177] * Verifying Kubernetes components...
	I1107 23:22:13.028119  103523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:22:13.038997  103523 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:22:13.039267  103523 kapi.go:59] client config for multinode-542158: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.crt", KeyFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/profiles/multinode-542158/client.key", CAFile:"/home/jenkins/minikube-integration/17585-9432/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1107 23:22:13.039505  103523 node_ready.go:35] waiting up to 6m0s for node "multinode-542158-m02" to be "Ready" ...
	I1107 23:22:13.039571  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158-m02
	I1107 23:22:13.039579  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:13.039586  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:13.039597  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:13.042160  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:13.042186  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:13.042197  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:13.042207  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:13.042215  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:13 GMT
	I1107 23:22:13.042223  103523 round_trippers.go:580]     Audit-Id: 26aac244-c72b-48e7-bbe9-7073e4d5e55a
	I1107 23:22:13.042234  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:13.042246  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:13.042446  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158-m02","uid":"4b913db7-112f-46e0-b4c8-61e126dcd5cf","resourceVersion":"433","creationTimestamp":"2023-11-07T23:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1107 23:22:13.042892  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158-m02
	I1107 23:22:13.042912  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:13.042923  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:13.042930  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:13.045174  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:13.045194  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:13.045203  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:13.045211  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:13.045219  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:13.045228  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:13 GMT
	I1107 23:22:13.045240  103523 round_trippers.go:580]     Audit-Id: 12d6dccb-c2dc-4729-9758-f9c0477c15fe
	I1107 23:22:13.045253  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:13.045389  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158-m02","uid":"4b913db7-112f-46e0-b4c8-61e126dcd5cf","resourceVersion":"433","creationTimestamp":"2023-11-07T23:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1107 23:22:13.546436  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158-m02
	I1107 23:22:13.546463  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:13.546470  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:13.546476  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:13.548868  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:13.548889  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:13.548897  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:13 GMT
	I1107 23:22:13.548905  103523 round_trippers.go:580]     Audit-Id: 2bbcc887-cf4f-4737-9acf-e7f5f1471f78
	I1107 23:22:13.548913  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:13.548920  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:13.548928  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:13.548936  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:13.549064  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158-m02","uid":"4b913db7-112f-46e0-b4c8-61e126dcd5cf","resourceVersion":"433","creationTimestamp":"2023-11-07T23:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I1107 23:22:14.046748  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158-m02
	I1107 23:22:14.046779  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.046791  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.046802  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.049329  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.049355  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.049363  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.049369  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.049374  103523 round_trippers.go:580]     Audit-Id: 7feecc0f-ec96-4822-8d94-e5dba325b7c7
	I1107 23:22:14.049379  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.049387  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.049396  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.049569  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158-m02","uid":"4b913db7-112f-46e0-b4c8-61e126dcd5cf","resourceVersion":"447","creationTimestamp":"2023-11-07T23:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5176 chars]
	I1107 23:22:14.049884  103523 node_ready.go:49] node "multinode-542158-m02" has status "Ready":"True"
	I1107 23:22:14.049906  103523 node_ready.go:38] duration metric: took 1.010384671s waiting for node "multinode-542158-m02" to be "Ready" ...
	I1107 23:22:14.049915  103523 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:22:14.049992  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1107 23:22:14.050002  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.050012  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.050018  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.054653  103523 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1107 23:22:14.054676  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.054683  103523 round_trippers.go:580]     Audit-Id: 8b10a216-4d80-4e1f-af58-10518ef5864d
	I1107 23:22:14.054689  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.054694  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.054700  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.054708  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.054716  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.055313  103523 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"451"},"items":[{"metadata":{"name":"coredns-5dd5756b68-d4f2j","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"357e0565-e17e-4d94-9a73-7bd0152ba3af","resourceVersion":"389","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1107 23:22:14.057539  103523 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d4f2j" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.057623  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-d4f2j
	I1107 23:22:14.057634  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.057641  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.057647  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.059863  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.059884  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.059893  103523 round_trippers.go:580]     Audit-Id: 03d63f14-b225-45ec-b6fe-bab9ef808f2b
	I1107 23:22:14.059901  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.059909  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.059921  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.059933  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.059942  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.060046  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-d4f2j","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"357e0565-e17e-4d94-9a73-7bd0152ba3af","resourceVersion":"389","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e05ff4d-59e7-4b04-84c7-67a06baf3ec5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1107 23:22:14.060476  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:22:14.060491  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.060498  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.060503  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.062536  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.062550  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.062557  103523 round_trippers.go:580]     Audit-Id: 1fe2a148-d3f9-4ab2-9e98-82e3ee1cf856
	I1107 23:22:14.062562  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.062568  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.062576  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.062586  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.062598  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.062723  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:22:14.063659  103523 pod_ready.go:92] pod "coredns-5dd5756b68-d4f2j" in "kube-system" namespace has status "Ready":"True"
	I1107 23:22:14.063684  103523 pod_ready.go:81] duration metric: took 6.119511ms waiting for pod "coredns-5dd5756b68-d4f2j" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.063701  103523 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.063801  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-542158
	I1107 23:22:14.063810  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.063825  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.063833  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.066576  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.066594  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.066601  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.066606  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.066611  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.066616  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.066622  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.066627  103523 round_trippers.go:580]     Audit-Id: 97213078-7737-42b5-a51f-a88a39a2b9bb
	I1107 23:22:14.066775  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-542158","namespace":"kube-system","uid":"ff322856-032e-409e-a32e-937f41b80534","resourceVersion":"279","creationTimestamp":"2023-11-07T23:21:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8118049fe5aee964ee5c4fa55a555ba4","kubernetes.io/config.mirror":"8118049fe5aee964ee5c4fa55a555ba4","kubernetes.io/config.seen":"2023-11-07T23:21:41.992952425Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1107 23:22:14.067153  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:22:14.067168  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.067179  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.067188  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.069202  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.069218  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.069224  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.069229  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.069234  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.069239  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.069245  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.069254  103523 round_trippers.go:580]     Audit-Id: e6244d62-1470-4125-8836-471d9741b761
	I1107 23:22:14.069450  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:22:14.069743  103523 pod_ready.go:92] pod "etcd-multinode-542158" in "kube-system" namespace has status "Ready":"True"
	I1107 23:22:14.069757  103523 pod_ready.go:81] duration metric: took 6.050089ms waiting for pod "etcd-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.069770  103523 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.069818  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-542158
	I1107 23:22:14.069825  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.069832  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.069838  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.072001  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.072015  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.072021  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.072026  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.072031  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.072036  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.072041  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.072046  103523 round_trippers.go:580]     Audit-Id: 9a88032a-073c-4856-848f-1681c357368f
	I1107 23:22:14.072227  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-542158","namespace":"kube-system","uid":"0a1da361-805c-4b8f-a3db-88e4834e12cb","resourceVersion":"253","creationTimestamp":"2023-11-07T23:21:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"9042e7c5330cfcad3544cd17028012a6","kubernetes.io/config.mirror":"9042e7c5330cfcad3544cd17028012a6","kubernetes.io/config.seen":"2023-11-07T23:21:41.992954381Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1107 23:22:14.072713  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:22:14.072729  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.072743  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.072753  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.074450  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:22:14.074471  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.074481  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.074490  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.074505  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.074522  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.074532  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.074542  103523 round_trippers.go:580]     Audit-Id: 3ac6b70b-60e4-4f29-afb0-aa59b11ef4cd
	I1107 23:22:14.074659  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:22:14.074946  103523 pod_ready.go:92] pod "kube-apiserver-multinode-542158" in "kube-system" namespace has status "Ready":"True"
	I1107 23:22:14.074961  103523 pod_ready.go:81] duration metric: took 5.185659ms waiting for pod "kube-apiserver-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.074970  103523 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.075022  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-542158
	I1107 23:22:14.075029  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.075036  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.075045  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.076902  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:22:14.076918  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.076926  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.076935  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.076943  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.076954  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.076964  103523 round_trippers.go:580]     Audit-Id: b2d50896-c494-469a-8b69-7fd92eb02a44
	I1107 23:22:14.076970  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.077148  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-542158","namespace":"kube-system","uid":"76366db7-a73c-4b9d-a0a5-e572a95585c6","resourceVersion":"267","creationTimestamp":"2023-11-07T23:21:40Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"63686c3fbb92c907ba59d1d8ac68e4fc","kubernetes.io/config.mirror":"63686c3fbb92c907ba59d1d8ac68e4fc","kubernetes.io/config.seen":"2023-11-07T23:21:36.059217058Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1107 23:22:14.077550  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:22:14.077566  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.077584  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.077604  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.079585  103523 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1107 23:22:14.079607  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.079617  103523 round_trippers.go:580]     Audit-Id: ce0c2892-d793-4740-9602-c3f1078471a1
	I1107 23:22:14.079624  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.079630  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.079635  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.079643  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.079648  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.079804  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:22:14.080174  103523 pod_ready.go:92] pod "kube-controller-manager-multinode-542158" in "kube-system" namespace has status "Ready":"True"
	I1107 23:22:14.080192  103523 pod_ready.go:81] duration metric: took 5.213338ms waiting for pod "kube-controller-manager-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.080203  103523 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5m8jq" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.247599  103523 request.go:629] Waited for 167.31376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5m8jq
	I1107 23:22:14.247650  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5m8jq
	I1107 23:22:14.247656  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.247666  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.247676  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.250035  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.250056  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.250063  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.250068  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.250073  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.250079  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.250084  103523 round_trippers.go:580]     Audit-Id: adb8ed1a-cc18-4456-af68-af6256485525
	I1107 23:22:14.250091  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.250230  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5m8jq","generateName":"kube-proxy-","namespace":"kube-system","uid":"546186cc-fa1d-43c0-8dea-81bfe7a6a835","resourceVersion":"370","creationTimestamp":"2023-11-07T23:21:55Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a73ed559-5e99-4814-8b20-df2d69624bd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a73ed559-5e99-4814-8b20-df2d69624bd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1107 23:22:14.447082  103523 request.go:629] Waited for 196.350144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:22:14.447148  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:22:14.447152  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.447159  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.447169  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.449541  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.449567  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.449574  103523 round_trippers.go:580]     Audit-Id: a5794af7-aef9-499f-847c-8e7f5a900237
	I1107 23:22:14.449580  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.449585  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.449592  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.449601  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.449610  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.449754  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:22:14.450199  103523 pod_ready.go:92] pod "kube-proxy-5m8jq" in "kube-system" namespace has status "Ready":"True"
	I1107 23:22:14.450231  103523 pod_ready.go:81] duration metric: took 369.999141ms waiting for pod "kube-proxy-5m8jq" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.450247  103523 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xztw4" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.647010  103523 request.go:629] Waited for 196.706034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xztw4
	I1107 23:22:14.647081  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xztw4
	I1107 23:22:14.647093  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.647101  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.647108  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.649458  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.649482  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.649493  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.649502  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.649510  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.649519  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.649527  103523 round_trippers.go:580]     Audit-Id: 52302ddb-cf90-4c35-86ae-ed3713dca6b3
	I1107 23:22:14.649537  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.649682  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xztw4","generateName":"kube-proxy-","namespace":"kube-system","uid":"936d46f1-3555-49d8-8291-0faafa0d6855","resourceVersion":"448","creationTimestamp":"2023-11-07T23:22:12Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a73ed559-5e99-4814-8b20-df2d69624bd5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a73ed559-5e99-4814-8b20-df2d69624bd5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1107 23:22:14.847555  103523 request.go:629] Waited for 197.374145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-542158-m02
	I1107 23:22:14.847619  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158-m02
	I1107 23:22:14.847623  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:14.847631  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:14.847637  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:14.850018  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:14.850038  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:14.850045  103523 round_trippers.go:580]     Audit-Id: 54e82f87-7426-4ec5-8be9-0dad6917ad58
	I1107 23:22:14.850050  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:14.850055  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:14.850060  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:14.850066  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:14.850072  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:14 GMT
	I1107 23:22:14.850168  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158-m02","uid":"4b913db7-112f-46e0-b4c8-61e126dcd5cf","resourceVersion":"447","creationTimestamp":"2023-11-07T23:22:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:22:12Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5176 chars]
	I1107 23:22:14.850460  103523 pod_ready.go:92] pod "kube-proxy-xztw4" in "kube-system" namespace has status "Ready":"True"
	I1107 23:22:14.850482  103523 pod_ready.go:81] duration metric: took 400.223478ms waiting for pod "kube-proxy-xztw4" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:14.850491  103523 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:15.046803  103523 request.go:629] Waited for 196.248232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-542158
	I1107 23:22:15.046881  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-542158
	I1107 23:22:15.046893  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:15.046901  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:15.046908  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:15.049391  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:15.049417  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:15.049428  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:15 GMT
	I1107 23:22:15.049437  103523 round_trippers.go:580]     Audit-Id: 55d2a7c4-934b-4b28-8a1e-d3c8c9ee8c92
	I1107 23:22:15.049446  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:15.049454  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:15.049463  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:15.049475  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:15.049624  103523 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-542158","namespace":"kube-system","uid":"ec3b8184-4819-4a08-8361-f951b553564c","resourceVersion":"255","creationTimestamp":"2023-11-07T23:21:42Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"27daf2b049ce91bfd4f81b0138764b44","kubernetes.io/config.mirror":"27daf2b049ce91bfd4f81b0138764b44","kubernetes.io/config.seen":"2023-11-07T23:21:41.992950355Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-07T23:21:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1107 23:22:15.247416  103523 request.go:629] Waited for 197.359358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:22:15.247501  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-542158
	I1107 23:22:15.247509  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:15.247516  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:15.247527  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:15.249711  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:15.249735  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:15.249745  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:15.249754  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:15.249763  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:15 GMT
	I1107 23:22:15.249772  103523 round_trippers.go:580]     Audit-Id: 6259da01-27dc-49dc-b12d-9b820d603fa0
	I1107 23:22:15.249781  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:15.249796  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:15.249952  103523 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-07T23:21:38Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1107 23:22:15.250360  103523 pod_ready.go:92] pod "kube-scheduler-multinode-542158" in "kube-system" namespace has status "Ready":"True"
	I1107 23:22:15.250380  103523 pod_ready.go:81] duration metric: took 399.878951ms waiting for pod "kube-scheduler-multinode-542158" in "kube-system" namespace to be "Ready" ...
	I1107 23:22:15.250393  103523 pod_ready.go:38] duration metric: took 1.200457226s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1107 23:22:15.250412  103523 system_svc.go:44] waiting for kubelet service to be running ....
	I1107 23:22:15.250466  103523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:22:15.261440  103523 system_svc.go:56] duration metric: took 11.020867ms WaitForService to wait for kubelet.
	I1107 23:22:15.261478  103523 kubeadm.go:581] duration metric: took 2.237526179s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1107 23:22:15.261500  103523 node_conditions.go:102] verifying NodePressure condition ...
	I1107 23:22:15.446862  103523 request.go:629] Waited for 185.288733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1107 23:22:15.446931  103523 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1107 23:22:15.446942  103523 round_trippers.go:469] Request Headers:
	I1107 23:22:15.446950  103523 round_trippers.go:473]     Accept: application/json, */*
	I1107 23:22:15.446956  103523 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1107 23:22:15.449374  103523 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1107 23:22:15.449397  103523 round_trippers.go:577] Response Headers:
	I1107 23:22:15.449408  103523 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fd363fcb-1371-43b1-80ba-4e6565aa65e1
	I1107 23:22:15.449417  103523 round_trippers.go:580]     Date: Tue, 07 Nov 2023 23:22:15 GMT
	I1107 23:22:15.449426  103523 round_trippers.go:580]     Audit-Id: 08d51690-4aab-4ccb-83ee-8105dc8fff5e
	I1107 23:22:15.449433  103523 round_trippers.go:580]     Cache-Control: no-cache, private
	I1107 23:22:15.449438  103523 round_trippers.go:580]     Content-Type: application/json
	I1107 23:22:15.449443  103523 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 245cf8bf-3921-43c6-b9f2-10f384212019
	I1107 23:22:15.449620  103523 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"multinode-542158","uid":"1c24c1e1-4042-42d5-b26a-cab49a35bd1f","resourceVersion":"376","creationTimestamp":"2023-11-07T23:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-542158","kubernetes.io/os":"linux","minikube.k8s.io/commit":"693359050ae80510825facc3cb57aa024560c29e","minikube.k8s.io/name":"multinode-542158","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_07T23_21_42_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12168 chars]
	I1107 23:22:15.450260  103523 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 23:22:15.450280  103523 node_conditions.go:123] node cpu capacity is 8
	I1107 23:22:15.450294  103523 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1107 23:22:15.450303  103523 node_conditions.go:123] node cpu capacity is 8
	I1107 23:22:15.450309  103523 node_conditions.go:105] duration metric: took 188.804768ms to run NodePressure ...
	I1107 23:22:15.450327  103523 start.go:228] waiting for startup goroutines ...
	I1107 23:22:15.450358  103523 start.go:242] writing updated cluster config ...
	I1107 23:22:15.450692  103523 ssh_runner.go:195] Run: rm -f paused
	I1107 23:22:15.496314  103523 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1107 23:22:15.499548  103523 out.go:177] * Done! kubectl is now configured to use "multinode-542158" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 07 23:21:58 multinode-542158 crio[956]: time="2023-11-07 23:21:58.458804527Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0471e99e89ca1f1e564b3eb90355af60155f231c100eff42ad765ccd0bc68460/merged/etc/passwd: no such file or directory"
	Nov 07 23:21:58 multinode-542158 crio[956]: time="2023-11-07 23:21:58.458838996Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0471e99e89ca1f1e564b3eb90355af60155f231c100eff42ad765ccd0bc68460/merged/etc/group: no such file or directory"
	Nov 07 23:21:58 multinode-542158 crio[956]: time="2023-11-07 23:21:58.498146315Z" level=info msg="Created container d57e559eca4d9a200a9ef3ff59956d74070a39715f4efe3c7e32ea31abb11843: kube-system/storage-provisioner/storage-provisioner" id=52b13ddc-8ec7-4934-9c11-279523c49bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 07 23:21:58 multinode-542158 crio[956]: time="2023-11-07 23:21:58.498765573Z" level=info msg="Starting container: d57e559eca4d9a200a9ef3ff59956d74070a39715f4efe3c7e32ea31abb11843" id=447c205b-e83d-4d49-9368-e50c2a0f54e2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 07 23:21:58 multinode-542158 crio[956]: time="2023-11-07 23:21:58.508988134Z" level=info msg="Started container" PID=2402 containerID=d57e559eca4d9a200a9ef3ff59956d74070a39715f4efe3c7e32ea31abb11843 description=kube-system/storage-provisioner/storage-provisioner id=447c205b-e83d-4d49-9368-e50c2a0f54e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72ba568f0f9eb80c814451dadfe518d836f7af46108770361ca68a123170963f
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.495196543Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-n8tmh/POD" id=e3da8a93-bbad-4c39-9d44-cd714f6339aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.495262007Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.509268080Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-n8tmh Namespace:default ID:bc192eda612121156cb7e50394d172cb4f3f053a805ac8a262d6f5216cb4ecd5 UID:9fbf235e-d96d-4647-b6a3-ed37b0df2874 NetNS:/var/run/netns/f41e773b-00d9-41ce-b4bc-f0a1e9f3a1c7 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.509311511Z" level=info msg="Adding pod default_busybox-5bc68d56bd-n8tmh to CNI network \"kindnet\" (type=ptp)"
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.517772652Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-n8tmh Namespace:default ID:bc192eda612121156cb7e50394d172cb4f3f053a805ac8a262d6f5216cb4ecd5 UID:9fbf235e-d96d-4647-b6a3-ed37b0df2874 NetNS:/var/run/netns/f41e773b-00d9-41ce-b4bc-f0a1e9f3a1c7 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.517917572Z" level=info msg="Checking pod default_busybox-5bc68d56bd-n8tmh for CNI network kindnet (type=ptp)"
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.547049909Z" level=info msg="Ran pod sandbox bc192eda612121156cb7e50394d172cb4f3f053a805ac8a262d6f5216cb4ecd5 with infra container: default/busybox-5bc68d56bd-n8tmh/POD" id=e3da8a93-bbad-4c39-9d44-cd714f6339aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.548288061Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=de4f43bd-ccec-4b1f-8562-a7e6db61bac6 name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.548516622Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=de4f43bd-ccec-4b1f-8562-a7e6db61bac6 name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.549242949Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=c1419c48-7d7d-41f8-bd49-70fddce78d07 name=/runtime.v1.ImageService/PullImage
	Nov 07 23:22:16 multinode-542158 crio[956]: time="2023-11-07 23:22:16.553767462Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 07 23:22:17 multinode-542158 crio[956]: time="2023-11-07 23:22:17.139453587Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 07 23:22:18 multinode-542158 crio[956]: time="2023-11-07 23:22:18.739531999Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=c1419c48-7d7d-41f8-bd49-70fddce78d07 name=/runtime.v1.ImageService/PullImage
	Nov 07 23:22:18 multinode-542158 crio[956]: time="2023-11-07 23:22:18.740562421Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=b10d0289-806a-4907-8a6d-8dfb0a9f670b name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:22:18 multinode-542158 crio[956]: time="2023-11-07 23:22:18.741319600Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b10d0289-806a-4907-8a6d-8dfb0a9f670b name=/runtime.v1.ImageService/ImageStatus
	Nov 07 23:22:18 multinode-542158 crio[956]: time="2023-11-07 23:22:18.742220876Z" level=info msg="Creating container: default/busybox-5bc68d56bd-n8tmh/busybox" id=04bee25e-66be-4612-ab68-bc93925212c2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 07 23:22:18 multinode-542158 crio[956]: time="2023-11-07 23:22:18.742357631Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 07 23:22:18 multinode-542158 crio[956]: time="2023-11-07 23:22:18.821622308Z" level=info msg="Created container fe0f6af47a7374dd4b956ee5e72015f1a267f8bbdc690d25309c983310ff7e7d: default/busybox-5bc68d56bd-n8tmh/busybox" id=04bee25e-66be-4612-ab68-bc93925212c2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 07 23:22:18 multinode-542158 crio[956]: time="2023-11-07 23:22:18.822433483Z" level=info msg="Starting container: fe0f6af47a7374dd4b956ee5e72015f1a267f8bbdc690d25309c983310ff7e7d" id=222ceda7-d4df-4d68-8033-77ef5114dac6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 07 23:22:18 multinode-542158 crio[956]: time="2023-11-07 23:22:18.832653687Z" level=info msg="Started container" PID=2531 containerID=fe0f6af47a7374dd4b956ee5e72015f1a267f8bbdc690d25309c983310ff7e7d description=default/busybox-5bc68d56bd-n8tmh/busybox id=222ceda7-d4df-4d68-8033-77ef5114dac6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bc192eda612121156cb7e50394d172cb4f3f053a805ac8a262d6f5216cb4ecd5
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fe0f6af47a737       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago       Running             busybox                   0                   bc192eda61212       busybox-5bc68d56bd-n8tmh
	d57e559eca4d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      24 seconds ago      Running             storage-provisioner       0                   72ba568f0f9eb       storage-provisioner
	96ebda5840bd3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      25 seconds ago      Running             coredns                   0                   e5372cd6ff53d       coredns-5dd5756b68-d4f2j
	49d2a3fd2e11c       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      27 seconds ago      Running             kindnet-cni               0                   e9364959dcb3d       kindnet-7hgsm
	040b910f43e38       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      27 seconds ago      Running             kube-proxy                0                   dd9c4ca51223b       kube-proxy-5m8jq
	efdb609b8a711       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      46 seconds ago      Running             kube-scheduler            0                   aa4cb4f361b5e       kube-scheduler-multinode-542158
	69bf73d647c51       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      46 seconds ago      Running             etcd                      0                   bc628332a485b       etcd-multinode-542158
	c687ec119abf5       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      46 seconds ago      Running             kube-apiserver            0                   d6144b5cfb33e       kube-apiserver-multinode-542158
	e1af41343e7b9       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      46 seconds ago      Running             kube-controller-manager   0                   84361427416f0       kube-controller-manager-multinode-542158
	
	* 
	* ==> coredns [96ebda5840bd33e7cbf9f651d2b0f30a235036ad03808f90c9a8de6cf2716314] <==
	* [INFO] 10.244.1.2:45604 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085098s
	[INFO] 10.244.0.3:50955 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113879s
	[INFO] 10.244.0.3:58111 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00177822s
	[INFO] 10.244.0.3:47232 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000060469s
	[INFO] 10.244.0.3:44980 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067446s
	[INFO] 10.244.0.3:52628 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001248586s
	[INFO] 10.244.0.3:48357 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000041567s
	[INFO] 10.244.0.3:57973 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060964s
	[INFO] 10.244.0.3:40207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038489s
	[INFO] 10.244.1.2:47907 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114977s
	[INFO] 10.244.1.2:59089 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106337s
	[INFO] 10.244.1.2:55651 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053806s
	[INFO] 10.244.1.2:45828 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000494s
	[INFO] 10.244.0.3:37610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102056s
	[INFO] 10.244.0.3:37736 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091947s
	[INFO] 10.244.0.3:42893 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050398s
	[INFO] 10.244.0.3:52699 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057805s
	[INFO] 10.244.1.2:36421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130234s
	[INFO] 10.244.1.2:35621 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154696s
	[INFO] 10.244.1.2:50804 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161031s
	[INFO] 10.244.1.2:35265 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100573s
	[INFO] 10.244.0.3:40885 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126847s
	[INFO] 10.244.0.3:35324 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093087s
	[INFO] 10.244.0.3:41184 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005963s
	[INFO] 10.244.0.3:46038 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059475s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-542158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-542158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=693359050ae80510825facc3cb57aa024560c29e
	                    minikube.k8s.io/name=multinode-542158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_07T23_21_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-542158
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:22:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:21:57 +0000   Tue, 07 Nov 2023 23:21:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:21:57 +0000   Tue, 07 Nov 2023 23:21:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:21:57 +0000   Tue, 07 Nov 2023 23:21:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:21:57 +0000   Tue, 07 Nov 2023 23:21:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-542158
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 26d1ec1bd8d64200a83e15b088aca538
	  System UUID:                94b5036b-91fe-4b00-b56f-76df37e0335b
	  Boot ID:                    c97cc438-dd92-4788-91bf-3e8db350d4d3
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-n8tmh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-d4f2j                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-multinode-542158                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-7hgsm                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-multinode-542158             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-multinode-542158    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-5m8jq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-multinode-542158             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 42s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node multinode-542158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node multinode-542158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node multinode-542158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node multinode-542158 event: Registered Node multinode-542158 in Controller
	  Normal  NodeReady                26s   kubelet          Node multinode-542158 status is now: NodeReady
	
	
	Name:               multinode-542158-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-542158-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 Nov 2023 23:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-542158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 Nov 2023 23:22:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 Nov 2023 23:22:13 +0000   Tue, 07 Nov 2023 23:22:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 Nov 2023 23:22:13 +0000   Tue, 07 Nov 2023 23:22:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 Nov 2023 23:22:13 +0000   Tue, 07 Nov 2023 23:22:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 Nov 2023 23:22:13 +0000   Tue, 07 Nov 2023 23:22:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-542158-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8bf3fa4bfbd4e62adc9a1d14eb887b8
	  System UUID:                e663d955-e8ef-445b-ae42-450ade528cd0
	  Boot ID:                    c97cc438-dd92-4788-91bf-3e8db350d4d3
	  Kernel Version:             5.15.0-1046-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-7phrb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-bm9lc               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-proxy-xztw4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10s                kube-proxy       
	  Normal  NodeHasSufficientMemory  11s (x5 over 13s)  kubelet          Node multinode-542158-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x5 over 13s)  kubelet          Node multinode-542158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x5 over 13s)  kubelet          Node multinode-542158-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10s                kubelet          Node multinode-542158-m02 status is now: NodeReady
	  Normal  RegisteredNode           9s                 node-controller  Node multinode-542158-m02 event: Registered Node multinode-542158-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.004919] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006591] FS-Cache: N-cookie d=0000000093c421a3{9p.inode} n=00000000fd5ed719
	[  +0.007437] FS-Cache: N-key=[8] '8aa00f0200000000'
	[  +0.258455] FS-Cache: Duplicate cookie detected
	[  +0.004692] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006756] FS-Cache: O-cookie d=0000000093c421a3{9p.inode} n=0000000051a2543c
	[  +0.007368] FS-Cache: O-key=[8] '97a00f0200000000'
	[  +0.004932] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006594] FS-Cache: N-cookie d=0000000093c421a3{9p.inode} n=00000000c900a7a4
	[  +0.008778] FS-Cache: N-key=[8] '97a00f0200000000'
	[  +8.636027] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 7 23:14] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[  +1.023718] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[  +2.019776] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[  +4.091691] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[  +8.191452] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[ +16.126809] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	[Nov 7 23:15] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de 06 3c 2c 7b b3 fa ef 8c 5f ae 3d 08 00
	
	* 
	* ==> etcd [69bf73d647c513685c7a806cd06b07a6e3b30fd928d34557a6057503a7060409] <==
	* {"level":"info","ts":"2023-11-07T23:21:36.887706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-11-07T23:21:36.888799Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-11-07T23:21:36.890977Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-07T23:21:36.891245Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-07T23:21:36.891284Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-07T23:21:36.891382Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-07T23:21:36.891397Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-07T23:21:37.818438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-07T23:21:37.818511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-07T23:21:37.818528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-07T23:21:37.818543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-07T23:21:37.81855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-07T23:21:37.81856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-07T23:21:37.818569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-07T23:21:37.819678Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:21:37.820521Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-542158 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-07T23:21:37.82053Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:21:37.820581Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-07T23:21:37.820832Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-07T23:21:37.820883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-07T23:21:37.820955Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:21:37.821044Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:21:37.821067Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-07T23:21:37.821734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-11-07T23:21:37.821833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  23:22:23 up  1:04,  0 users,  load average: 1.01, 0.96, 0.72
	Linux multinode-542158 5.15.0-1046-gcp #54~20.04.1-Ubuntu SMP Wed Oct 25 08:22:15 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [49d2a3fd2e11cf39dd47944b659bfe242ccd4bbfaf75dbe90df3814709f9965e] <==
	* I1107 23:21:56.585574       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1107 23:21:56.585667       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1107 23:21:56.585822       1 main.go:116] setting mtu 1500 for CNI 
	I1107 23:21:56.585851       1 main.go:146] kindnetd IP family: "ipv4"
	I1107 23:21:56.585882       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1107 23:21:56.885823       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1107 23:21:56.885854       1 main.go:227] handling current node
	I1107 23:22:06.991516       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1107 23:22:06.991548       1 main.go:227] handling current node
	I1107 23:22:17.004444       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1107 23:22:17.004471       1 main.go:227] handling current node
	I1107 23:22:17.004481       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1107 23:22:17.004486       1 main.go:250] Node multinode-542158-m02 has CIDR [10.244.1.0/24] 
	I1107 23:22:17.004643       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [c687ec119abf5b3650611710e2e1394e62d18b1fe62bab195c45f30dc987374b] <==
	* I1107 23:21:39.080933       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1107 23:21:39.080347       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1107 23:21:39.081172       1 aggregator.go:166] initial CRD sync complete...
	I1107 23:21:39.081194       1 autoregister_controller.go:141] Starting autoregister controller
	I1107 23:21:39.081202       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1107 23:21:39.081211       1 cache.go:39] Caches are synced for autoregister controller
	I1107 23:21:39.083305       1 controller.go:624] quota admission added evaluator for: namespaces
	I1107 23:21:39.083921       1 shared_informer.go:318] Caches are synced for configmaps
	E1107 23:21:39.093063       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1107 23:21:39.296116       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1107 23:21:39.847503       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1107 23:21:39.850940       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1107 23:21:39.850959       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1107 23:21:40.271087       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1107 23:21:40.307653       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1107 23:21:40.398745       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1107 23:21:40.404539       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1107 23:21:40.405502       1 controller.go:624] quota admission added evaluator for: endpoints
	I1107 23:21:40.409497       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1107 23:21:40.913796       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1107 23:21:41.913750       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1107 23:21:41.924235       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1107 23:21:41.934009       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1107 23:21:54.746225       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1107 23:21:55.698161       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [e1af41343e7b96d5dbab652617a3b93b1d1fa967c60b20d7ce4bde684695b3d0] <==
	* I1107 23:21:57.235460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="139.635µs"
	I1107 23:21:57.247198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="127.648µs"
	I1107 23:21:58.126628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="147.911µs"
	I1107 23:21:58.144185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.637068ms"
	I1107 23:21:58.144325       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.531µs"
	I1107 23:21:59.693867       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1107 23:22:12.263138       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-542158-m02\" does not exist"
	I1107 23:22:12.275104       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xztw4"
	I1107 23:22:12.275141       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bm9lc"
	I1107 23:22:12.275556       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-542158-m02" podCIDRs=["10.244.1.0/24"]
	I1107 23:22:13.711134       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-542158-m02"
	I1107 23:22:14.694868       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-542158-m02"
	I1107 23:22:14.694866       1 event.go:307] "Event occurred" object="multinode-542158-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-542158-m02 event: Registered Node multinode-542158-m02 in Controller"
	I1107 23:22:16.174207       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1107 23:22:16.182394       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-7phrb"
	I1107 23:22:16.185869       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-n8tmh"
	I1107 23:22:16.189897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.836776ms"
	I1107 23:22:16.195037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.06757ms"
	I1107 23:22:16.195108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.448µs"
	I1107 23:22:16.198201       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.608µs"
	I1107 23:22:16.203086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.885µs"
	I1107 23:22:19.175036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.823885ms"
	I1107 23:22:19.175122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.069µs"
	I1107 23:22:19.813356       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.610891ms"
	I1107 23:22:19.813475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.94µs"
	
	* 
	* ==> kube-proxy [040b910f43e3815960ff0a838d6a1617ec97538aa39411ff163706d2daf444c0] <==
	* I1107 23:21:56.588718       1 server_others.go:69] "Using iptables proxy"
	I1107 23:21:56.599410       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1107 23:21:56.619330       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1107 23:21:56.621373       1 server_others.go:152] "Using iptables Proxier"
	I1107 23:21:56.621411       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1107 23:21:56.621417       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1107 23:21:56.621447       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1107 23:21:56.621688       1 server.go:846] "Version info" version="v1.28.3"
	I1107 23:21:56.621714       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1107 23:21:56.622625       1 config.go:188] "Starting service config controller"
	I1107 23:21:56.622732       1 config.go:315] "Starting node config controller"
	I1107 23:21:56.622746       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1107 23:21:56.622788       1 config.go:97] "Starting endpoint slice config controller"
	I1107 23:21:56.623852       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1107 23:21:56.623342       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1107 23:21:56.723495       1 shared_informer.go:318] Caches are synced for service config
	I1107 23:21:56.724575       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1107 23:21:56.724589       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [efdb609b8a7111853eeb017920bf40a85774f52cfc68bd030dae8121d0b14733] <==
	* E1107 23:21:39.093549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1107 23:21:39.093554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1107 23:21:39.093623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:21:39.093649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 23:21:39.093625       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1107 23:21:39.093687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1107 23:21:39.093765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1107 23:21:39.093773       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1107 23:21:39.093784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1107 23:21:39.093792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1107 23:21:39.093878       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1107 23:21:39.093901       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1107 23:21:39.093883       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1107 23:21:39.093914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 23:21:39.093946       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1107 23:21:39.093923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1107 23:21:39.950928       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1107 23:21:39.950977       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1107 23:21:40.066880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1107 23:21:40.066917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1107 23:21:40.087284       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1107 23:21:40.087319       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1107 23:21:40.088626       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1107 23:21:40.088664       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1107 23:21:40.385946       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 07 23:21:55 multinode-542158 kubelet[1592]: I1107 23:21:55.893581    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphfk\" (UniqueName: \"kubernetes.io/projected/3d31a034-7445-45d3-9ad0-6dc7e44d4513-kube-api-access-sphfk\") pod \"kindnet-7hgsm\" (UID: \"3d31a034-7445-45d3-9ad0-6dc7e44d4513\") " pod="kube-system/kindnet-7hgsm"
	Nov 07 23:21:55 multinode-542158 kubelet[1592]: I1107 23:21:55.893688    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/546186cc-fa1d-43c0-8dea-81bfe7a6a835-kube-proxy\") pod \"kube-proxy-5m8jq\" (UID: \"546186cc-fa1d-43c0-8dea-81bfe7a6a835\") " pod="kube-system/kube-proxy-5m8jq"
	Nov 07 23:21:55 multinode-542158 kubelet[1592]: I1107 23:21:55.893726    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3d31a034-7445-45d3-9ad0-6dc7e44d4513-cni-cfg\") pod \"kindnet-7hgsm\" (UID: \"3d31a034-7445-45d3-9ad0-6dc7e44d4513\") " pod="kube-system/kindnet-7hgsm"
	Nov 07 23:21:55 multinode-542158 kubelet[1592]: I1107 23:21:55.893763    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d31a034-7445-45d3-9ad0-6dc7e44d4513-lib-modules\") pod \"kindnet-7hgsm\" (UID: \"3d31a034-7445-45d3-9ad0-6dc7e44d4513\") " pod="kube-system/kindnet-7hgsm"
	Nov 07 23:21:55 multinode-542158 kubelet[1592]: I1107 23:21:55.893797    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/546186cc-fa1d-43c0-8dea-81bfe7a6a835-lib-modules\") pod \"kube-proxy-5m8jq\" (UID: \"546186cc-fa1d-43c0-8dea-81bfe7a6a835\") " pod="kube-system/kube-proxy-5m8jq"
	Nov 07 23:21:55 multinode-542158 kubelet[1592]: I1107 23:21:55.893826    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d31a034-7445-45d3-9ad0-6dc7e44d4513-xtables-lock\") pod \"kindnet-7hgsm\" (UID: \"3d31a034-7445-45d3-9ad0-6dc7e44d4513\") " pod="kube-system/kindnet-7hgsm"
	Nov 07 23:21:56 multinode-542158 kubelet[1592]: W1107 23:21:56.180852    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/crio-e9364959dcb3da3d76bf87a776e76c2a39bd578e3c94c1444b668d19fb1c3c9a WatchSource:0}: Error finding container e9364959dcb3da3d76bf87a776e76c2a39bd578e3c94c1444b668d19fb1c3c9a: Status 404 returned error can't find the container with id e9364959dcb3da3d76bf87a776e76c2a39bd578e3c94c1444b668d19fb1c3c9a
	Nov 07 23:21:56 multinode-542158 kubelet[1592]: W1107 23:21:56.181240    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/crio-dd9c4ca51223b5f5d54da50d00feb9f6796dc323ed97e0b74b75e4ac13feb5f9 WatchSource:0}: Error finding container dd9c4ca51223b5f5d54da50d00feb9f6796dc323ed97e0b74b75e4ac13feb5f9: Status 404 returned error can't find the container with id dd9c4ca51223b5f5d54da50d00feb9f6796dc323ed97e0b74b75e4ac13feb5f9
	Nov 07 23:21:57 multinode-542158 kubelet[1592]: I1107 23:21:57.121870    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5m8jq" podStartSLOduration=2.121807055 podCreationTimestamp="2023-11-07 23:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:21:57.121593847 +0000 UTC m=+15.234494618" watchObservedRunningTime="2023-11-07 23:21:57.121807055 +0000 UTC m=+15.234707827"
	Nov 07 23:21:57 multinode-542158 kubelet[1592]: I1107 23:21:57.131147    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-7hgsm" podStartSLOduration=2.131084699 podCreationTimestamp="2023-11-07 23:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:21:57.13090956 +0000 UTC m=+15.243810332" watchObservedRunningTime="2023-11-07 23:21:57.131084699 +0000 UTC m=+15.243985470"
	Nov 07 23:21:57 multinode-542158 kubelet[1592]: I1107 23:21:57.213618    1592 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 07 23:21:57 multinode-542158 kubelet[1592]: I1107 23:21:57.235579    1592 topology_manager.go:215] "Topology Admit Handler" podUID="357e0565-e17e-4d94-9a73-7bd0152ba3af" podNamespace="kube-system" podName="coredns-5dd5756b68-d4f2j"
	Nov 07 23:21:57 multinode-542158 kubelet[1592]: I1107 23:21:57.303688    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/357e0565-e17e-4d94-9a73-7bd0152ba3af-config-volume\") pod \"coredns-5dd5756b68-d4f2j\" (UID: \"357e0565-e17e-4d94-9a73-7bd0152ba3af\") " pod="kube-system/coredns-5dd5756b68-d4f2j"
	Nov 07 23:21:57 multinode-542158 kubelet[1592]: I1107 23:21:57.303832    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4f9l\" (UniqueName: \"kubernetes.io/projected/357e0565-e17e-4d94-9a73-7bd0152ba3af-kube-api-access-r4f9l\") pod \"coredns-5dd5756b68-d4f2j\" (UID: \"357e0565-e17e-4d94-9a73-7bd0152ba3af\") " pod="kube-system/coredns-5dd5756b68-d4f2j"
	Nov 07 23:21:57 multinode-542158 kubelet[1592]: W1107 23:21:57.580567    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/crio-e5372cd6ff53da706495460a18749bb9e3bfe8b178165f9a1e4d873f26146735 WatchSource:0}: Error finding container e5372cd6ff53da706495460a18749bb9e3bfe8b178165f9a1e4d873f26146735: Status 404 returned error can't find the container with id e5372cd6ff53da706495460a18749bb9e3bfe8b178165f9a1e4d873f26146735
	Nov 07 23:21:58 multinode-542158 kubelet[1592]: I1107 23:21:58.101829    1592 topology_manager.go:215] "Topology Admit Handler" podUID="cd4b23c6-8cf3-4f1a-909d-4f727d1ecebd" podNamespace="kube-system" podName="storage-provisioner"
	Nov 07 23:21:58 multinode-542158 kubelet[1592]: I1107 23:21:58.126581    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-d4f2j" podStartSLOduration=3.126528398 podCreationTimestamp="2023-11-07 23:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:21:58.126310441 +0000 UTC m=+16.239211213" watchObservedRunningTime="2023-11-07 23:21:58.126528398 +0000 UTC m=+16.239429168"
	Nov 07 23:21:58 multinode-542158 kubelet[1592]: I1107 23:21:58.208957    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm2lq\" (UniqueName: \"kubernetes.io/projected/cd4b23c6-8cf3-4f1a-909d-4f727d1ecebd-kube-api-access-zm2lq\") pod \"storage-provisioner\" (UID: \"cd4b23c6-8cf3-4f1a-909d-4f727d1ecebd\") " pod="kube-system/storage-provisioner"
	Nov 07 23:21:58 multinode-542158 kubelet[1592]: I1107 23:21:58.209210    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cd4b23c6-8cf3-4f1a-909d-4f727d1ecebd-tmp\") pod \"storage-provisioner\" (UID: \"cd4b23c6-8cf3-4f1a-909d-4f727d1ecebd\") " pod="kube-system/storage-provisioner"
	Nov 07 23:21:58 multinode-542158 kubelet[1592]: W1107 23:21:58.444674    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/crio-72ba568f0f9eb80c814451dadfe518d836f7af46108770361ca68a123170963f WatchSource:0}: Error finding container 72ba568f0f9eb80c814451dadfe518d836f7af46108770361ca68a123170963f: Status 404 returned error can't find the container with id 72ba568f0f9eb80c814451dadfe518d836f7af46108770361ca68a123170963f
	Nov 07 23:22:16 multinode-542158 kubelet[1592]: I1107 23:22:16.191940    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=20.191834812 podCreationTimestamp="2023-11-07 23:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-07 23:21:59.128289799 +0000 UTC m=+17.241190570" watchObservedRunningTime="2023-11-07 23:22:16.191834812 +0000 UTC m=+34.304735583"
	Nov 07 23:22:16 multinode-542158 kubelet[1592]: I1107 23:22:16.192966    1592 topology_manager.go:215] "Topology Admit Handler" podUID="9fbf235e-d96d-4647-b6a3-ed37b0df2874" podNamespace="default" podName="busybox-5bc68d56bd-n8tmh"
	Nov 07 23:22:16 multinode-542158 kubelet[1592]: I1107 23:22:16.224689    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbz2r\" (UniqueName: \"kubernetes.io/projected/9fbf235e-d96d-4647-b6a3-ed37b0df2874-kube-api-access-hbz2r\") pod \"busybox-5bc68d56bd-n8tmh\" (UID: \"9fbf235e-d96d-4647-b6a3-ed37b0df2874\") " pod="default/busybox-5bc68d56bd-n8tmh"
	Nov 07 23:22:16 multinode-542158 kubelet[1592]: W1107 23:22:16.544705    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/crio-bc192eda612121156cb7e50394d172cb4f3f053a805ac8a262d6f5216cb4ecd5 WatchSource:0}: Error finding container bc192eda612121156cb7e50394d172cb4f3f053a805ac8a262d6f5216cb4ecd5: Status 404 returned error can't find the container with id bc192eda612121156cb7e50394d172cb4f3f053a805ac8a262d6f5216cb4ecd5
	Nov 07 23:22:19 multinode-542158 kubelet[1592]: I1107 23:22:19.169430    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-n8tmh" podStartSLOduration=0.978010795 podCreationTimestamp="2023-11-07 23:22:16 +0000 UTC" firstStartedPulling="2023-11-07 23:22:16.548708367 +0000 UTC m=+34.661609132" lastFinishedPulling="2023-11-07 23:22:18.740082902 +0000 UTC m=+36.852983665" observedRunningTime="2023-11-07 23:22:19.169374887 +0000 UTC m=+37.282275659" watchObservedRunningTime="2023-11-07 23:22:19.169385328 +0000 UTC m=+37.282286098"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-542158 -n multinode-542158
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-542158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.01s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (69.32s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.1430948998.exe start -p running-upgrade-800740 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.1430948998.exe start -p running-upgrade-800740 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.33102874s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-800740 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-800740 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.84541423s)

                                                
                                                
-- stdout --
	* [running-upgrade-800740] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-800740 in cluster running-upgrade-800740
	* Pulling base image ...
	* Updating the running docker "running-upgrade-800740" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 23:35:56.601977  199972 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:35:56.602114  199972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:35:56.602124  199972 out.go:309] Setting ErrFile to fd 2...
	I1107 23:35:56.602128  199972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:35:56.602375  199972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:35:56.603040  199972 out.go:303] Setting JSON to false
	I1107 23:35:56.604965  199972 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4707,"bootTime":1699395450,"procs":614,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:35:56.605034  199972 start.go:138] virtualization: kvm guest
	I1107 23:35:56.607804  199972 out.go:177] * [running-upgrade-800740] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:35:56.609769  199972 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:35:56.609824  199972 notify.go:220] Checking for updates...
	I1107 23:35:56.611597  199972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:35:56.613374  199972 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:35:56.614975  199972 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:35:56.616501  199972 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:35:56.618569  199972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:35:56.620727  199972 config.go:182] Loaded profile config "running-upgrade-800740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1107 23:35:56.620760  199972 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:35:56.622857  199972 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1107 23:35:56.624328  199972 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:35:56.647929  199972 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:35:56.648067  199972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:35:56.701956  199972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:72 SystemTime:2023-11-07 23:35:56.693096897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:35:56.702067  199972 docker.go:295] overlay module found
	I1107 23:35:56.704248  199972 out.go:177] * Using the docker driver based on existing profile
	I1107 23:35:56.705926  199972 start.go:298] selected driver: docker
	I1107 23:35:56.705946  199972 start.go:902] validating driver "docker" against &{Name:running-upgrade-800740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-800740 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1107 23:35:56.706060  199972 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:35:56.706897  199972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:35:56.762056  199972 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:72 SystemTime:2023-11-07 23:35:56.752464097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:35:56.762389  199972 cni.go:84] Creating CNI manager for ""
	I1107 23:35:56.762411  199972 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1107 23:35:56.762421  199972 start_flags.go:323] config:
	{Name:running-upgrade-800740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-800740 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1107 23:35:56.765829  199972 out.go:177] * Starting control plane node running-upgrade-800740 in cluster running-upgrade-800740
	I1107 23:35:56.767496  199972 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:35:56.769100  199972 out.go:177] * Pulling base image ...
	I1107 23:35:56.770458  199972 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1107 23:35:56.770497  199972 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:35:56.788301  199972 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 23:35:56.788330  199972 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	W1107 23:35:56.954928  199972 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1107 23:35:56.955119  199972 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/running-upgrade-800740/config.json ...
	I1107 23:35:56.955210  199972 cache.go:107] acquiring lock: {Name:mkae279c77b7cb64f13a4549cc047c229649e198 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:35:56.955248  199972 cache.go:107] acquiring lock: {Name:mkf37917f59d3a5ddd2a51df4b9acfb7b94d0987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:35:56.955256  199972 cache.go:107] acquiring lock: {Name:mk24237c3b767666869d5d2b399a8b34efdadd51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:35:56.955294  199972 cache.go:107] acquiring lock: {Name:mk55820168f880ee22f25c69912b26faef0ce366 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:35:56.955209  199972 cache.go:107] acquiring lock: {Name:mkb5f18b742bd2a425d33f555ca59cd34c58b390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:35:56.955319  199972 cache.go:107] acquiring lock: {Name:mkabbcca1dde38cd56df5b4833476219180ba252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:35:56.955354  199972 cache.go:115] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1107 23:35:56.955369  199972 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 171.926µs
	I1107 23:35:56.955375  199972 cache.go:115] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1107 23:35:56.955380  199972 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1107 23:35:56.955380  199972 cache.go:115] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1107 23:35:56.955384  199972 cache.go:115] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1107 23:35:56.955386  199972 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 206.951µs
	I1107 23:35:56.955390  199972 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 98.139µs
	I1107 23:35:56.955402  199972 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1107 23:35:56.955404  199972 cache.go:115] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1107 23:35:56.955405  199972 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1107 23:35:56.955397  199972 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 148.141µs
	I1107 23:35:56.955411  199972 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 94.943µs
	I1107 23:35:56.955415  199972 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1107 23:35:56.955416  199972 cache.go:115] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1107 23:35:56.955419  199972 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1107 23:35:56.955425  199972 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 187.518µs
	I1107 23:35:56.955434  199972 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1107 23:35:56.955409  199972 cache.go:107] acquiring lock: {Name:mk2c6c55b3a23d82cef2dcedccedbc4843d7f04a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:35:56.955420  199972 cache.go:107] acquiring lock: {Name:mk6bacc946581af8ef4b73a62c153003adaabeaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:35:56.955501  199972 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:35:56.955535  199972 start.go:365] acquiring machines lock for running-upgrade-800740: {Name:mkc58f6a2b4e466cf3f532ff618cca49a1f053cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:35:56.955628  199972 cache.go:115] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1107 23:35:56.955647  199972 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 264.518µs
	I1107 23:35:56.955671  199972 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1107 23:35:56.955671  199972 cache.go:115] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1107 23:35:56.955691  199972 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 322.916µs
	I1107 23:35:56.955725  199972 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1107 23:35:56.955736  199972 cache.go:87] Successfully saved all images to host disk.
	I1107 23:35:56.955693  199972 start.go:369] acquired machines lock for "running-upgrade-800740" in 139.708µs
	I1107 23:35:56.955759  199972 start.go:96] Skipping create...Using existing machine configuration
	I1107 23:35:56.955801  199972 fix.go:54] fixHost starting: m01
	I1107 23:35:56.956071  199972 cli_runner.go:164] Run: docker container inspect running-upgrade-800740 --format={{.State.Status}}
	I1107 23:35:56.973530  199972 fix.go:102] recreateIfNeeded on running-upgrade-800740: state=Running err=<nil>
	W1107 23:35:56.973564  199972 fix.go:128] unexpected machine state, will restart: <nil>
	I1107 23:35:56.976931  199972 out.go:177] * Updating the running docker "running-upgrade-800740" container ...
	I1107 23:35:56.978553  199972 machine.go:88] provisioning docker machine ...
	I1107 23:35:56.978575  199972 ubuntu.go:169] provisioning hostname "running-upgrade-800740"
	I1107 23:35:56.978622  199972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-800740
	I1107 23:35:56.996065  199972 main.go:141] libmachine: Using SSH client type: native
	I1107 23:35:56.996447  199972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I1107 23:35:56.996468  199972 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-800740 && echo "running-upgrade-800740" | sudo tee /etc/hostname
	I1107 23:35:57.112760  199972 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-800740
	
	I1107 23:35:57.112843  199972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-800740
	I1107 23:35:57.131292  199972 main.go:141] libmachine: Using SSH client type: native
	I1107 23:35:57.131613  199972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I1107 23:35:57.131632  199972 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-800740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-800740/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-800740' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:35:57.239793  199972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:35:57.239832  199972 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9432/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9432/.minikube}
	I1107 23:35:57.239863  199972 ubuntu.go:177] setting up certificates
	I1107 23:35:57.239879  199972 provision.go:83] configureAuth start
	I1107 23:35:57.239942  199972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-800740
	I1107 23:35:57.259439  199972 provision.go:138] copyHostCerts
	I1107 23:35:57.259502  199972 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem, removing ...
	I1107 23:35:57.259518  199972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem
	I1107 23:35:57.259573  199972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem (1078 bytes)
	I1107 23:35:57.259691  199972 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem, removing ...
	I1107 23:35:57.259703  199972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem
	I1107 23:35:57.259728  199972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem (1123 bytes)
	I1107 23:35:57.259822  199972 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem, removing ...
	I1107 23:35:57.259833  199972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem
	I1107 23:35:57.259860  199972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem (1675 bytes)
	I1107 23:35:57.259930  199972 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-800740 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-800740]
	I1107 23:35:57.899880  199972 provision.go:172] copyRemoteCerts
	I1107 23:35:57.899953  199972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:35:57.899989  199972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-800740
	I1107 23:35:57.918414  199972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/running-upgrade-800740/id_rsa Username:docker}
	I1107 23:35:57.999080  199972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:35:58.016782  199972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1107 23:35:58.034855  199972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:35:58.054347  199972 provision.go:86] duration metric: configureAuth took 814.454209ms
	I1107 23:35:58.054375  199972 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:35:58.054567  199972 config.go:182] Loaded profile config "running-upgrade-800740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1107 23:35:58.054695  199972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-800740
	I1107 23:35:58.072774  199972 main.go:141] libmachine: Using SSH client type: native
	I1107 23:35:58.073093  199972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32969 <nil> <nil>}
	I1107 23:35:58.073114  199972 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:35:58.512686  199972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:35:58.512712  199972 machine.go:91] provisioned docker machine in 1.534144887s
	I1107 23:35:58.512726  199972 start.go:300] post-start starting for "running-upgrade-800740" (driver="docker")
	I1107 23:35:58.512741  199972 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:35:58.512807  199972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:35:58.512853  199972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-800740
	I1107 23:35:58.531122  199972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/running-upgrade-800740/id_rsa Username:docker}
	I1107 23:35:58.615667  199972 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:35:58.619675  199972 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:35:58.619717  199972 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:35:58.619733  199972 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:35:58.619742  199972 info.go:137] Remote host: Ubuntu 19.10
	I1107 23:35:58.619792  199972 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/addons for local assets ...
	I1107 23:35:58.619867  199972 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/files for local assets ...
	I1107 23:35:58.619980  199972 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> 162112.pem in /etc/ssl/certs
	I1107 23:35:58.620117  199972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:35:58.628343  199972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem --> /etc/ssl/certs/162112.pem (1708 bytes)
	I1107 23:35:58.645461  199972 start.go:303] post-start completed in 132.71743ms
	I1107 23:35:58.645537  199972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:35:58.645619  199972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-800740
	I1107 23:35:58.664482  199972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/running-upgrade-800740/id_rsa Username:docker}
	I1107 23:35:58.744593  199972 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:35:58.748661  199972 fix.go:56] fixHost completed within 1.792853335s
	I1107 23:35:58.748692  199972 start.go:83] releasing machines lock for "running-upgrade-800740", held for 1.792939111s
	I1107 23:35:58.748763  199972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-800740
	I1107 23:35:58.765977  199972 ssh_runner.go:195] Run: cat /version.json
	I1107 23:35:58.766010  199972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:35:58.766036  199972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-800740
	I1107 23:35:58.766087  199972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-800740
	I1107 23:35:58.785769  199972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/running-upgrade-800740/id_rsa Username:docker}
	I1107 23:35:58.787069  199972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32969 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/running-upgrade-800740/id_rsa Username:docker}
	W1107 23:35:58.863295  199972 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1107 23:35:58.863370  199972 ssh_runner.go:195] Run: systemctl --version
	I1107 23:35:58.867426  199972 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:35:58.929066  199972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:35:58.937070  199972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:35:58.954781  199972 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:35:58.954863  199972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:35:58.994618  199972 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1107 23:35:58.994640  199972 start.go:472] detecting cgroup driver to use...
	I1107 23:35:58.994666  199972 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:35:58.994701  199972 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:35:59.021709  199972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:35:59.031053  199972 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:35:59.031105  199972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:35:59.040386  199972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:35:59.049075  199972 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1107 23:35:59.058605  199972 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1107 23:35:59.058670  199972 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:35:59.161656  199972 docker.go:219] disabling docker service ...
	I1107 23:35:59.161719  199972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:35:59.173067  199972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:35:59.182507  199972 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:35:59.265688  199972 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:35:59.344428  199972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:35:59.355011  199972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:35:59.369345  199972 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1107 23:35:59.369416  199972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:35:59.380747  199972 out.go:177] 
	W1107 23:35:59.382546  199972 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1107 23:35:59.382574  199972 out.go:239] * 
	W1107 23:35:59.383439  199972 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 23:35:59.384586  199972 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-800740 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
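The stderr above shows the root cause: the v1.9.0-era container has no /etc/crio/crio.conf.d/02-crio.conf, so the unguarded `sed -i` exits with status 2 and minikube aborts with RUNTIME_ENABLE. A minimal sketch of a guarded rewrite (hypothetical helper name `set_pause_image`; the path and image tag are taken from the log, and the demo edits a throwaway file rather than the real /etc/crio path):

```shell
#!/usr/bin/env sh
# set_pause_image CONF_FILE IMAGE
# Rewrites the pause_image line in a CRI-O config file, but only if the
# file exists -- the unguarded `sed -i` in the log fails on older base
# images that ship no /etc/crio/crio.conf.d/02-crio.conf drop-in.
set_pause_image() {
    conf="$1"
    image="$2"
    if [ ! -f "$conf" ]; then
        echo "skipping: $conf not found" >&2
        return 1
    fi
    sed -i "s|^.*pause_image = .*$|pause_image = \"$image\"|" "$conf"
}

# Demo against a temporary file instead of /etc/crio/crio.conf.d/02-crio.conf.
tmp=$(mktemp)
printf 'pause_image = "registry.k8s.io/pause:3.1"\n' > "$tmp"
set_pause_image "$tmp" "registry.k8s.io/pause:3.2"
cat "$tmp"   # pause_image = "registry.k8s.io/pause:3.2"
rm -f "$tmp"
```

A fallback branch (e.g. editing /etc/crio/crio.conf when the drop-in is absent) would be one way to make the upgrade path tolerate old images; the sketch only demonstrates the existence guard.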
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-07 23:35:59.403348831 +0000 UTC m=+2101.096236683
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-800740
helpers_test.go:235: (dbg) docker inspect running-upgrade-800740:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "599cb123dc0abe8666a0359354cf666a92b7e878047d6f0751e0f070ac0cd207",
	        "Created": "2023-11-07T23:34:54.572747314Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 189051,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:34:55.119363011Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/599cb123dc0abe8666a0359354cf666a92b7e878047d6f0751e0f070ac0cd207/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/599cb123dc0abe8666a0359354cf666a92b7e878047d6f0751e0f070ac0cd207/hostname",
	        "HostsPath": "/var/lib/docker/containers/599cb123dc0abe8666a0359354cf666a92b7e878047d6f0751e0f070ac0cd207/hosts",
	        "LogPath": "/var/lib/docker/containers/599cb123dc0abe8666a0359354cf666a92b7e878047d6f0751e0f070ac0cd207/599cb123dc0abe8666a0359354cf666a92b7e878047d6f0751e0f070ac0cd207-json.log",
	        "Name": "/running-upgrade-800740",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-800740:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9e819f76c17c6016527766531a5cb4552da00d27f569da4d61720d8a770d3d11-init/diff:/var/lib/docker/overlay2/36efc57635ac8ce8c5877296448c3209fbbc8f1ce2c12dd0b7e2cbc89d9c8221/diff:/var/lib/docker/overlay2/a7e687f63bfba4a7a53924051f0d6f11fb0ecb936dd6d73ca6f3ed865db80bad/diff:/var/lib/docker/overlay2/9f754abb1844cba3704ef82fa0aece2d5927eff5104d0e098b346534e9a4ca9a/diff:/var/lib/docker/overlay2/bccbe070e71c93b8e5df0d956e8f8d9f78c1b2feaa0537ddb213c1e7f4277a6f/diff:/var/lib/docker/overlay2/57feb90d24615f465078fe77a40bef084c58fb8de0be566c92cce5d6646f663c/diff:/var/lib/docker/overlay2/2831518399e457aa7e65f552834989ada90161fdaeec78284698ab660b862889/diff:/var/lib/docker/overlay2/318b7402755108732248417b2b1850b014e8259538fbd50d95d65fd0a02680d0/diff:/var/lib/docker/overlay2/6ee0d75d0bfc59f1c1ee25e5ccbbfdd58d9175c4612695e18ae4d5fe0d33cd9f/diff:/var/lib/docker/overlay2/d934c4bd56d74a76b7080846f3472f12a314c44a20983d56988428c63f31a95d/diff:/var/lib/docker/overlay2/3c68c5
7f5bc35c9585d6fc86ac8b833f8c70189dfcbabaacbbcd7aa920af4cd1/diff:/var/lib/docker/overlay2/6cd036b6d61ef9a1b287ae3a9798c2200d8cdfdbadc3ae6e889062f9e4fa3eab/diff:/var/lib/docker/overlay2/9e38abb4d556a201fd276188b9c424a52f3ea81715a11b881d3fb3a000667e3d/diff:/var/lib/docker/overlay2/0acf07cffd0114bc87b703d346a217a6dc60ca513e26e34a657d0c549d28599b/diff:/var/lib/docker/overlay2/1c01dc7337700e97358e9ddc82fb124cca54fba519c5508e1af0b9b69f87ef2a/diff:/var/lib/docker/overlay2/fc21cb144b295e70d051b6e518e886d8957c6390f6c48d7e00437ed35a1b908f/diff:/var/lib/docker/overlay2/0efec5dd92a5df0bd523e0975c083d6fbce1e5b4559c95a9cbfd0b48a42a8008/diff:/var/lib/docker/overlay2/e00a9ca918541621220fc3a74b984d31c78d8002530ea0c52df97cf970d41747/diff:/var/lib/docker/overlay2/45dcec53f22fdcb34f565f740caa9969ec84eb72d8254c61e7e36cbc68b24000/diff:/var/lib/docker/overlay2/d4b12c765d4b16e693babe8d6e1ca41418289e9aab8f9dc6aef99cb29df9fd20/diff:/var/lib/docker/overlay2/ec54c1e10f07a3b6e43ccef5f71b7cef30684572885d2b31cf27c5e83da8ff11/diff:/var/lib/d
ocker/overlay2/a57add8b1c0134bf4f93fdc4902f8568c3288d648acca3cae66d858a7314bb99/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e819f76c17c6016527766531a5cb4552da00d27f569da4d61720d8a770d3d11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e819f76c17c6016527766531a5cb4552da00d27f569da4d61720d8a770d3d11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e819f76c17c6016527766531a5cb4552da00d27f569da4d61720d8a770d3d11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-800740",
	                "Source": "/var/lib/docker/volumes/running-upgrade-800740/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-800740",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-800740",
	                "name.minikube.sigs.k8s.io": "running-upgrade-800740",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d2e141d52e962c44bc008ce51d1eaccb0513f00c17e120f43663e4e312799ffc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d2e141d52e96",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "5eb7d2a756280282d28467c26710fc14cdfe8bc8f449a2f21d2e5c682f739d43",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "935a7fddac4648c97ba59cba08acff57faabe2cf03f7223418fd4d1a7babe7f4",
	                    "EndpointID": "5eb7d2a756280282d28467c26710fc14cdfe8bc8f449a2f21d2e5c682f739d43",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-800740 -n running-upgrade-800740
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-800740 -n running-upgrade-800740: exit status 4 (314.420026ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1107 23:35:59.704908  201105 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-800740" does not appear in /home/jenkins/minikube-integration/17585-9432/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-800740" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-800740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-800740
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-800740: (1.879121427s)
--- FAIL: TestRunningBinaryUpgrade (69.32s)
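The exit status 4 above comes from a kubeconfig entry that no longer matches the upgraded cluster ("running-upgrade-800740" does not appear in the kubeconfig), and the WARNING in the stdout names the remedy. A minimal recovery sketch, assuming the profile name from this run; these commands are illustrative and environment-dependent, not part of the test:

```shell
# Refresh the kubeconfig entry so it points at the current API server
# endpoint for the profile (per the WARNING printed by `minikube status`):
minikube update-context -p running-upgrade-800740

# Confirm the context now resolves and the endpoint matches:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
minikube status -p running-upgrade-800740
```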

x
+
TestStoppedBinaryUpgrade/Upgrade (114.01s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.682541755.exe start -p stopped-upgrade-951392 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.682541755.exe start -p stopped-upgrade-951392 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m42.997717042s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.682541755.exe -p stopped-upgrade-951392 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.682541755.exe -p stopped-upgrade-951392 stop: (1.129152988s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-951392 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-951392 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (9.883161373s)

-- stdout --
	* [stopped-upgrade-951392] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-951392 in cluster stopped-upgrade-951392
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-951392" ...
	
	

-- /stdout --
** stderr ** 
	I1107 23:34:39.527501  185092 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:34:39.527797  185092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:34:39.527808  185092 out.go:309] Setting ErrFile to fd 2...
	I1107 23:34:39.527816  185092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:34:39.527998  185092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:34:39.528529  185092 out.go:303] Setting JSON to false
	I1107 23:34:39.530026  185092 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4630,"bootTime":1699395450,"procs":672,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:34:39.530091  185092 start.go:138] virtualization: kvm guest
	I1107 23:34:39.532622  185092 out.go:177] * [stopped-upgrade-951392] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:34:39.534414  185092 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:34:39.534380  185092 notify.go:220] Checking for updates...
	I1107 23:34:39.538569  185092 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:34:39.540303  185092 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:34:39.542006  185092 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:34:39.543749  185092 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:34:39.545266  185092 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:34:39.547267  185092 config.go:182] Loaded profile config "stopped-upgrade-951392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1107 23:34:39.547287  185092 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 23:34:39.549203  185092 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1107 23:34:39.550688  185092 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:34:39.597166  185092 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:34:39.597277  185092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:34:39.665181  185092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:82 SystemTime:2023-11-07 23:34:39.65496001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:34:39.665338  185092 docker.go:295] overlay module found
	I1107 23:34:39.668385  185092 out.go:177] * Using the docker driver based on existing profile
	I1107 23:34:39.670841  185092 start.go:298] selected driver: docker
	I1107 23:34:39.670863  185092 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-951392 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-951392 Namespace: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1107 23:34:39.670977  185092 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:34:39.671847  185092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:34:39.747118  185092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:74 SystemTime:2023-11-07 23:34:39.737120233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:34:39.747451  185092 cni.go:84] Creating CNI manager for ""
	I1107 23:34:39.747474  185092 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1107 23:34:39.747496  185092 start_flags.go:323] config:
	{Name:stopped-upgrade-951392 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-951392 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1107 23:34:39.751097  185092 out.go:177] * Starting control plane node stopped-upgrade-951392 in cluster stopped-upgrade-951392
	I1107 23:34:39.752652  185092 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:34:39.754124  185092 out.go:177] * Pulling base image ...
	I1107 23:34:39.755571  185092 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1107 23:34:39.755666  185092 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:34:39.801075  185092 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 23:34:39.801107  185092 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	W1107 23:34:39.861549  185092 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1107 23:34:39.861747  185092 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/stopped-upgrade-951392/config.json ...
	I1107 23:34:39.861860  185092 cache.go:107] acquiring lock: {Name:mk6bacc946581af8ef4b73a62c153003adaabeaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:34:39.861939  185092 cache.go:107] acquiring lock: {Name:mkf37917f59d3a5ddd2a51df4b9acfb7b94d0987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:34:39.861973  185092 cache.go:107] acquiring lock: {Name:mk55820168f880ee22f25c69912b26faef0ce366 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:34:39.862049  185092 cache.go:194] Successfully downloaded all kic artifacts
	I1107 23:34:39.862067  185092 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I1107 23:34:39.862077  185092 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:34:39.862098  185092 cache.go:107] acquiring lock: {Name:mk24237c3b767666869d5d2b399a8b34efdadd51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:34:39.862104  185092 cache.go:107] acquiring lock: {Name:mkabbcca1dde38cd56df5b4833476219180ba252 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:34:39.862171  185092 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1107 23:34:39.862205  185092 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1107 23:34:39.861909  185092 cache.go:107] acquiring lock: {Name:mkb5f18b742bd2a425d33f555ca59cd34c58b390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:34:39.862047  185092 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I1107 23:34:39.862313  185092 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1107 23:34:39.862078  185092 start.go:365] acquiring machines lock for stopped-upgrade-951392: {Name:mk03e055b42983efa2db1e9c1a69f8fc280b5d79 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:34:39.862069  185092 cache.go:107] acquiring lock: {Name:mk2c6c55b3a23d82cef2dcedccedbc4843d7f04a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:34:39.862451  185092 start.go:369] acquired machines lock for "stopped-upgrade-951392" in 107.096µs
	I1107 23:34:39.862482  185092 start.go:96] Skipping create...Using existing machine configuration
	I1107 23:34:39.862487  185092 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I1107 23:34:39.862491  185092 fix.go:54] fixHost starting: m01
	I1107 23:34:39.862805  185092 cli_runner.go:164] Run: docker container inspect stopped-upgrade-951392 --format={{.State.Status}}
	I1107 23:34:39.861859  185092 cache.go:107] acquiring lock: {Name:mkae279c77b7cb64f13a4549cc047c229649e198 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 23:34:39.863095  185092 cache.go:115] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1107 23:34:39.863114  185092 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.264995ms
	I1107 23:34:39.863140  185092 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1107 23:34:39.863232  185092 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1107 23:34:39.863424  185092 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I1107 23:34:39.863444  185092 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1107 23:34:39.863480  185092 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I1107 23:34:39.863425  185092 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1107 23:34:39.863619  185092 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I1107 23:34:39.863654  185092 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I1107 23:34:39.884029  185092 fix.go:102] recreateIfNeeded on stopped-upgrade-951392: state=Stopped err=<nil>
	W1107 23:34:39.884086  185092 fix.go:128] unexpected machine state, will restart: <nil>
	I1107 23:34:39.886578  185092 out.go:177] * Restarting existing docker container for "stopped-upgrade-951392" ...
	I1107 23:34:39.888365  185092 cli_runner.go:164] Run: docker start stopped-upgrade-951392
	I1107 23:34:40.048737  185092 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1107 23:34:40.057778  185092 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1107 23:34:40.091221  185092 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1107 23:34:40.118759  185092 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I1107 23:34:40.141385  185092 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I1107 23:34:40.193765  185092 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I1107 23:34:40.234798  185092 cache.go:157] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1107 23:34:40.234834  185092 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 372.735286ms
	I1107 23:34:40.234849  185092 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1107 23:34:40.234804  185092 cache.go:162] opening:  /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I1107 23:34:40.257362  185092 cli_runner.go:164] Run: docker container inspect stopped-upgrade-951392 --format={{.State.Status}}
	I1107 23:34:40.283845  185092 kic.go:430] container "stopped-upgrade-951392" state is running.
	I1107 23:34:40.284262  185092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-951392
	I1107 23:34:40.307674  185092 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/stopped-upgrade-951392/config.json ...
	I1107 23:34:40.308039  185092 machine.go:88] provisioning docker machine ...
	I1107 23:34:40.308072  185092 ubuntu.go:169] provisioning hostname "stopped-upgrade-951392"
	I1107 23:34:40.308153  185092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-951392
	I1107 23:34:40.331594  185092 main.go:141] libmachine: Using SSH client type: native
	I1107 23:34:40.334015  185092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32966 <nil> <nil>}
	I1107 23:34:40.334125  185092 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-951392 && echo "stopped-upgrade-951392" | sudo tee /etc/hostname
	I1107 23:34:40.334867  185092 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51232->127.0.0.1:32966: read: connection reset by peer
	I1107 23:34:40.565117  185092 cache.go:157] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1107 23:34:40.565146  185092 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 703.043436ms
	I1107 23:34:40.565162  185092 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1107 23:34:40.947066  185092 cache.go:157] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1107 23:34:40.947145  185092 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.085077355s
	I1107 23:34:40.947171  185092 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1107 23:34:41.044248  185092 cache.go:157] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1107 23:34:41.044289  185092 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.182383478s
	I1107 23:34:41.044306  185092 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1107 23:34:41.167710  185092 cache.go:157] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1107 23:34:41.167748  185092 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.305828971s
	I1107 23:34:41.167786  185092 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1107 23:34:41.430811  185092 cache.go:157] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1107 23:34:41.430838  185092 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.568993821s
	I1107 23:34:41.430850  185092 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1107 23:34:41.757802  185092 cache.go:157] /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1107 23:34:41.757839  185092 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.895865347s
	I1107 23:34:41.757856  185092 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1107 23:34:41.757877  185092 cache.go:87] Successfully saved all images to host disk.
	I1107 23:34:43.466134  185092 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-951392
	
	I1107 23:34:43.466229  185092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-951392
	I1107 23:34:43.489284  185092 main.go:141] libmachine: Using SSH client type: native
	I1107 23:34:43.490036  185092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32966 <nil> <nil>}
	I1107 23:34:43.490091  185092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-951392' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-951392/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-951392' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 23:34:43.612621  185092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1107 23:34:43.612662  185092 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17585-9432/.minikube CaCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17585-9432/.minikube}
	I1107 23:34:43.612718  185092 ubuntu.go:177] setting up certificates
	I1107 23:34:43.612730  185092 provision.go:83] configureAuth start
	I1107 23:34:43.612798  185092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-951392
	I1107 23:34:43.634094  185092 provision.go:138] copyHostCerts
	I1107 23:34:43.634185  185092 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem, removing ...
	I1107 23:34:43.634201  185092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem
	I1107 23:34:43.634274  185092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/ca.pem (1078 bytes)
	I1107 23:34:43.634409  185092 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem, removing ...
	I1107 23:34:43.634423  185092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem
	I1107 23:34:43.634463  185092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/cert.pem (1123 bytes)
	I1107 23:34:43.634554  185092 exec_runner.go:144] found /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem, removing ...
	I1107 23:34:43.634570  185092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem
	I1107 23:34:43.634607  185092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17585-9432/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17585-9432/.minikube/key.pem (1675 bytes)
	I1107 23:34:43.634688  185092 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-951392 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-951392]
	I1107 23:34:43.841487  185092 provision.go:172] copyRemoteCerts
	I1107 23:34:43.841553  185092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 23:34:43.841604  185092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-951392
	I1107 23:34:43.862007  185092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/stopped-upgrade-951392/id_rsa Username:docker}
	I1107 23:34:43.947623  185092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1107 23:34:43.966585  185092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 23:34:43.985889  185092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 23:34:44.003483  185092 provision.go:86] duration metric: configureAuth took 390.740686ms
	I1107 23:34:44.003509  185092 ubuntu.go:193] setting minikube options for container-runtime
	I1107 23:34:44.003692  185092 config.go:182] Loaded profile config "stopped-upgrade-951392": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1107 23:34:44.003803  185092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-951392
	I1107 23:34:44.023217  185092 main.go:141] libmachine: Using SSH client type: native
	I1107 23:34:44.023544  185092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32966 <nil> <nil>}
	I1107 23:34:44.023563  185092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1107 23:34:48.422475  185092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1107 23:34:48.422504  185092 machine.go:91] provisioned docker machine in 8.114446397s
	I1107 23:34:48.422516  185092 start.go:300] post-start starting for "stopped-upgrade-951392" (driver="docker")
	I1107 23:34:48.422529  185092 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 23:34:48.422599  185092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 23:34:48.422649  185092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-951392
	I1107 23:34:48.442692  185092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/stopped-upgrade-951392/id_rsa Username:docker}
	I1107 23:34:48.529754  185092 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 23:34:48.533516  185092 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 23:34:48.533536  185092 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 23:34:48.533544  185092 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 23:34:48.533550  185092 info.go:137] Remote host: Ubuntu 19.10
	I1107 23:34:48.533560  185092 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/addons for local assets ...
	I1107 23:34:48.533618  185092 filesync.go:126] Scanning /home/jenkins/minikube-integration/17585-9432/.minikube/files for local assets ...
	I1107 23:34:48.533699  185092 filesync.go:149] local asset: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem -> 162112.pem in /etc/ssl/certs
	I1107 23:34:48.533807  185092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 23:34:48.542046  185092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/ssl/certs/162112.pem --> /etc/ssl/certs/162112.pem (1708 bytes)
	I1107 23:34:48.559290  185092 start.go:303] post-start completed in 136.758658ms
	I1107 23:34:48.559362  185092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:34:48.559401  185092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-951392
	I1107 23:34:48.577865  185092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/stopped-upgrade-951392/id_rsa Username:docker}
	I1107 23:34:48.660512  185092 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 23:34:48.664765  185092 fix.go:56] fixHost completed within 8.802270809s
	I1107 23:34:48.664790  185092 start.go:83] releasing machines lock for "stopped-upgrade-951392", held for 8.802315869s
	I1107 23:34:48.664867  185092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-951392
	I1107 23:34:48.683304  185092 ssh_runner.go:195] Run: cat /version.json
	I1107 23:34:48.683332  185092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 23:34:48.683362  185092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-951392
	I1107 23:34:48.683395  185092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-951392
	I1107 23:34:48.705101  185092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/stopped-upgrade-951392/id_rsa Username:docker}
	I1107 23:34:48.705854  185092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32966 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/stopped-upgrade-951392/id_rsa Username:docker}
	W1107 23:34:48.819336  185092 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1107 23:34:48.819413  185092 ssh_runner.go:195] Run: systemctl --version
	I1107 23:34:48.823851  185092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1107 23:34:48.872715  185092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 23:34:48.876951  185092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:34:48.896438  185092 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1107 23:34:48.896518  185092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1107 23:34:48.924139  185092 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1107 23:34:48.924164  185092 start.go:472] detecting cgroup driver to use...
	I1107 23:34:48.924210  185092 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 23:34:48.924260  185092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1107 23:34:48.948927  185092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1107 23:34:48.957963  185092 docker.go:203] disabling cri-docker service (if available) ...
	I1107 23:34:48.958015  185092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1107 23:34:48.966442  185092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1107 23:34:48.977888  185092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1107 23:34:48.990677  185092 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1107 23:34:48.990738  185092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1107 23:34:49.093954  185092 docker.go:219] disabling docker service ...
	I1107 23:34:49.094019  185092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1107 23:34:49.107675  185092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1107 23:34:49.121599  185092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1107 23:34:49.187178  185092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1107 23:34:49.275195  185092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1107 23:34:49.289339  185092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 23:34:49.316340  185092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1107 23:34:49.316408  185092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1107 23:34:49.330079  185092 out.go:177] 
	W1107 23:34:49.331752  185092 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1107 23:34:49.331796  185092 out.go:239] * 
	W1107 23:34:49.332849  185092 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 23:34:49.334458  185092 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-951392 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (114.01s)
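The terminal error above is the `pause_image` sed exiting with status 2 because the v1.9.0 guest image has no `/etc/crio/crio.conf.d/02-crio.conf` drop-in. A minimal defensive sketch (not minikube's actual code; the function name and fallback path are illustrative) would probe for an existing CRI-O config before editing:

```shell
#!/bin/sh
# Sketch only: try the drop-in config first, fall back to the monolithic
# /etc/crio/crio.conf that older images ship, and fail gracefully if neither
# exists instead of letting sed exit with status 2.
set_pause_image() {
  # $1: drop-in config path, $2: fallback config path, $3: pause image ref
  conf="$1"
  [ -f "$conf" ] || conf="$2"
  if [ -f "$conf" ]; then
    # Same substitution the log shows, but only run against a file that exists.
    sed -i "s|^.*pause_image = .*\$|pause_image = \"$3\"|" "$conf"
  else
    echo "no CRI-O config found; skipping pause_image update" >&2
    return 1
  fi
}
```

On the legacy Ubuntu 19.10 guest only the fallback path would be taken (an assumption based on the `sed: can't read` error above), which is exactly the case the unguarded command does not handle.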

                                                
                                    

Test pass (277/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 33.27
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 15.23
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.21
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
18 TestDownloadOnlyKic 1.29
19 TestBinaryMirror 0.76
20 TestOffline 80.03
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 145.68
27 TestAddons/parallel/Registry 15.61
29 TestAddons/parallel/InspektorGadget 10.65
30 TestAddons/parallel/MetricsServer 5.66
31 TestAddons/parallel/HelmTiller 11.93
33 TestAddons/parallel/CSI 68.82
34 TestAddons/parallel/Headlamp 18.21
35 TestAddons/parallel/CloudSpanner 5.5
36 TestAddons/parallel/LocalPath 55.93
37 TestAddons/parallel/NvidiaDevicePlugin 5.49
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/StoppedEnableDisable 12.2
42 TestCertOptions 28.2
43 TestCertExpiration 236.65
45 TestForceSystemdFlag 36.76
46 TestForceSystemdEnv 36.23
48 TestKVMDriverInstallOrUpdate 5.13
52 TestErrorSpam/setup 21.52
53 TestErrorSpam/start 0.65
54 TestErrorSpam/status 0.88
55 TestErrorSpam/pause 1.53
56 TestErrorSpam/unpause 1.49
57 TestErrorSpam/stop 1.42
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 69.85
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 25.57
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.6
69 TestFunctional/serial/CacheCmd/cache/add_local 1.98
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 33.16
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 1.36
80 TestFunctional/serial/LogsFileCmd 1.38
81 TestFunctional/serial/InvalidService 4
83 TestFunctional/parallel/ConfigCmd 0.44
84 TestFunctional/parallel/DashboardCmd 10.88
85 TestFunctional/parallel/DryRun 0.45
86 TestFunctional/parallel/InternationalLanguage 0.38
87 TestFunctional/parallel/StatusCmd 0.93
91 TestFunctional/parallel/ServiceCmdConnect 6.54
92 TestFunctional/parallel/AddonsCmd 0.28
93 TestFunctional/parallel/PersistentVolumeClaim 45.24
95 TestFunctional/parallel/SSHCmd 0.52
96 TestFunctional/parallel/CpCmd 1.3
97 TestFunctional/parallel/MySQL 24.32
98 TestFunctional/parallel/FileSync 0.3
99 TestFunctional/parallel/CertSync 1.94
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
108 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
109 TestFunctional/parallel/Version/short 0.08
110 TestFunctional/parallel/Version/components 0.51
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
115 TestFunctional/parallel/ImageCommands/ImageBuild 3.19
116 TestFunctional/parallel/ImageCommands/Setup 1.97
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.86
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.27
122 TestFunctional/parallel/ServiceCmd/List 0.51
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.35
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
126 TestFunctional/parallel/ServiceCmd/Format 0.63
127 TestFunctional/parallel/ServiceCmd/URL 0.75
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
129 TestFunctional/parallel/ProfileCmd/profile_list 0.37
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
131 TestFunctional/parallel/MountCmd/any-port 16.23
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.85
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.06
136 TestFunctional/parallel/MountCmd/specific-port 1.79
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.28
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/delete_addon-resizer_images 0.31
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 95.03
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.81
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.57
162 TestJSONOutput/start/Command 38.7
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.68
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.6
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.73
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.24
187 TestKicCustomNetwork/create_custom_network 41.45
188 TestKicCustomNetwork/use_default_bridge_network 25.51
189 TestKicExistingNetwork 26.88
190 TestKicCustomSubnet 27.03
191 TestKicStaticIP 28.82
192 TestMainNoArgs 0.07
193 TestMinikubeProfile 51.06
196 TestMountStart/serial/StartWithMountFirst 8.31
197 TestMountStart/serial/VerifyMountFirst 0.25
198 TestMountStart/serial/StartWithMountSecond 5.75
199 TestMountStart/serial/VerifyMountSecond 0.25
200 TestMountStart/serial/DeleteFirst 1.62
201 TestMountStart/serial/VerifyMountPostDelete 0.25
202 TestMountStart/serial/Stop 1.21
203 TestMountStart/serial/RestartStopped 7.36
204 TestMountStart/serial/VerifyMountPostStop 0.26
207 TestMultiNode/serial/FreshStart2Nodes 53.78
208 TestMultiNode/serial/DeployApp2Nodes 5.38
210 TestMultiNode/serial/AddNode 48.22
211 TestMultiNode/serial/ProfileList 0.27
212 TestMultiNode/serial/CopyFile 9.22
213 TestMultiNode/serial/StopNode 2.15
214 TestMultiNode/serial/StartAfterStop 11.03
215 TestMultiNode/serial/RestartKeepsNodes 113.38
216 TestMultiNode/serial/DeleteNode 4.7
217 TestMultiNode/serial/StopMultiNode 23.88
218 TestMultiNode/serial/RestartMultiNode 73.15
219 TestMultiNode/serial/ValidateNameConflict 26.23
224 TestPreload 179.16
226 TestScheduledStopUnix 96.58
229 TestInsufficientStorage 13.22
232 TestKubernetesUpgrade 385.89
233 TestMissingContainerUpgrade 159.08
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
239 TestNoKubernetes/serial/StartWithK8s 32.39
244 TestNetworkPlugins/group/false 8.17
248 TestStoppedBinaryUpgrade/Setup 2.08
250 TestNoKubernetes/serial/StartWithStopK8s 8.31
251 TestNoKubernetes/serial/Start 11.09
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
253 TestNoKubernetes/serial/ProfileList 1.15
254 TestNoKubernetes/serial/Stop 1.31
255 TestNoKubernetes/serial/StartNoArgs 7.96
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
264 TestStoppedBinaryUpgrade/MinikubeLogs 0.67
266 TestPause/serial/Start 42.47
267 TestNetworkPlugins/group/auto/Start 40.8
268 TestPause/serial/SecondStartNoReconfiguration 47.51
269 TestNetworkPlugins/group/auto/KubeletFlags 0.3
270 TestNetworkPlugins/group/auto/NetCatPod 9.27
271 TestNetworkPlugins/group/auto/DNS 0.17
272 TestNetworkPlugins/group/auto/Localhost 0.16
273 TestNetworkPlugins/group/auto/HairPin 0.15
274 TestNetworkPlugins/group/kindnet/Start 69.92
275 TestPause/serial/Pause 0.81
276 TestPause/serial/VerifyStatus 0.3
277 TestPause/serial/Unpause 0.64
278 TestPause/serial/PauseAgain 0.82
279 TestPause/serial/DeletePaused 2.75
280 TestPause/serial/VerifyDeletedResources 0.59
281 TestNetworkPlugins/group/calico/Start 70.22
282 TestNetworkPlugins/group/custom-flannel/Start 62.59
283 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
284 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
285 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
286 TestNetworkPlugins/group/calico/ControllerPod 5.02
287 TestNetworkPlugins/group/kindnet/DNS 0.17
288 TestNetworkPlugins/group/kindnet/Localhost 0.14
289 TestNetworkPlugins/group/kindnet/HairPin 0.14
290 TestNetworkPlugins/group/calico/KubeletFlags 0.29
291 TestNetworkPlugins/group/calico/NetCatPod 11.37
292 TestNetworkPlugins/group/calico/DNS 0.16
293 TestNetworkPlugins/group/calico/Localhost 0.17
294 TestNetworkPlugins/group/calico/HairPin 0.18
295 TestNetworkPlugins/group/enable-default-cni/Start 35.74
296 TestNetworkPlugins/group/flannel/Start 64.99
297 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
298 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.35
299 TestNetworkPlugins/group/custom-flannel/DNS 0.16
300 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
301 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
302 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
303 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
304 TestNetworkPlugins/group/enable-default-cni/DNS 32.53
305 TestNetworkPlugins/group/bridge/Start 42.85
307 TestStartStop/group/old-k8s-version/serial/FirstStart 139.37
308 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
309 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
310 TestNetworkPlugins/group/flannel/ControllerPod 5.02
311 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
312 TestNetworkPlugins/group/flannel/NetCatPod 11.31
313 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
314 TestNetworkPlugins/group/bridge/NetCatPod 9.31
315 TestNetworkPlugins/group/flannel/DNS 0.19
316 TestNetworkPlugins/group/flannel/Localhost 0.15
317 TestNetworkPlugins/group/flannel/HairPin 0.16
319 TestStartStop/group/no-preload/serial/FirstStart 72.81
320 TestNetworkPlugins/group/bridge/DNS 32.74
322 TestStartStop/group/embed-certs/serial/FirstStart 70.79
323 TestNetworkPlugins/group/bridge/Localhost 0.15
324 TestNetworkPlugins/group/bridge/HairPin 0.14
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.64
327 TestStartStop/group/no-preload/serial/DeployApp 10.37
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.93
329 TestStartStop/group/no-preload/serial/Stop 12.03
330 TestStartStop/group/embed-certs/serial/DeployApp 10.34
331 TestStartStop/group/old-k8s-version/serial/DeployApp 8.42
332 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
333 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
334 TestStartStop/group/no-preload/serial/SecondStart 340.96
335 TestStartStop/group/old-k8s-version/serial/Stop 12.04
336 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
337 TestStartStop/group/embed-certs/serial/Stop 11.98
338 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
339 TestStartStop/group/old-k8s-version/serial/SecondStart 430.3
340 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
341 TestStartStop/group/embed-certs/serial/SecondStart 346.52
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.42
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.27
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 342.46
347 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.02
348 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
349 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.46
350 TestStartStop/group/no-preload/serial/Pause 3.24
351 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 19.02
353 TestStartStop/group/newest-cni/serial/FirstStart 40.31
354 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
355 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
356 TestStartStop/group/embed-certs/serial/Pause 3.1
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.02
358 TestStartStop/group/newest-cni/serial/DeployApp 0
359 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
360 TestStartStop/group/newest-cni/serial/Stop 1.91
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
362 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
363 TestStartStop/group/newest-cni/serial/SecondStart 25.55
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.8
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
369 TestStartStop/group/newest-cni/serial/Pause 2.55
370 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
371 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
372 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
373 TestStartStop/group/old-k8s-version/serial/Pause 2.72
TestDownloadOnly/v1.16.0/json-events (33.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-778371 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-778371 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (33.272683453s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (33.27s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-778371
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-778371: exit status 85 (78.110067ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-778371 | jenkins | v1.32.0 | 07 Nov 23 23:00 UTC |          |
	|         | -p download-only-778371        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:00:58
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:00:58.407386   16222 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:00:58.407493   16222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:00:58.407501   16222 out.go:309] Setting ErrFile to fd 2...
	I1107 23:00:58.407506   16222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:00:58.407690   16222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	W1107 23:00:58.407829   16222 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-9432/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-9432/.minikube/config/config.json: no such file or directory
	I1107 23:00:58.408398   16222 out.go:303] Setting JSON to true
	I1107 23:00:58.409229   16222 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2609,"bootTime":1699395450,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:00:58.409291   16222 start.go:138] virtualization: kvm guest
	I1107 23:00:58.411958   16222 out.go:97] [download-only-778371] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:00:58.413890   16222 out.go:169] MINIKUBE_LOCATION=17585
	W1107 23:00:58.412072   16222 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball: no such file or directory
	I1107 23:00:58.412112   16222 notify.go:220] Checking for updates...
	I1107 23:00:58.417143   16222 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:00:58.418862   16222 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:00:58.420413   16222 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:00:58.421914   16222 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1107 23:00:58.424851   16222 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 23:00:58.425068   16222 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:00:58.448213   16222 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:00:58.448306   16222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:00:58.812338   16222 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-11-07 23:00:58.803412344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:00:58.812457   16222 docker.go:295] overlay module found
	I1107 23:00:58.814440   16222 out.go:97] Using the docker driver based on user configuration
	I1107 23:00:58.814474   16222 start.go:298] selected driver: docker
	I1107 23:00:58.814487   16222 start.go:902] validating driver "docker" against <nil>
	I1107 23:00:58.814579   16222 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:00:58.865071   16222 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-11-07 23:00:58.856654065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:00:58.865210   16222 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 23:00:58.865713   16222 start_flags.go:394] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1107 23:00:58.865859   16222 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 23:00:58.867933   16222 out.go:169] Using Docker driver with root privileges
	I1107 23:00:58.869410   16222 cni.go:84] Creating CNI manager for ""
	I1107 23:00:58.869432   16222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:00:58.869442   16222 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 23:00:58.869454   16222 start_flags.go:323] config:
	{Name:download-only-778371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-778371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:00:58.871035   16222 out.go:97] Starting control plane node download-only-778371 in cluster download-only-778371
	I1107 23:00:58.871058   16222 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:00:58.872286   16222 out.go:97] Pulling base image ...
	I1107 23:00:58.872312   16222 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1107 23:00:58.872447   16222 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:00:58.887607   16222 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:00:58.887819   16222 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 23:00:58.887927   16222 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:00:58.981258   16222 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1107 23:00:58.981289   16222 cache.go:56] Caching tarball of preloaded images
	I1107 23:00:58.981471   16222 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1107 23:00:58.983537   16222 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 23:00:58.983554   16222 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:00:59.092863   16222 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1107 23:01:13.324708   16222 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:01:13.324793   16222 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:01:14.232560   16222 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1107 23:01:14.232897   16222 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/download-only-778371/config.json ...
	I1107 23:01:14.232926   16222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/download-only-778371/config.json: {Name:mk2bc3a014038fc889856e16cafa0fa565ef6b96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 23:01:14.233101   16222 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1107 23:01:14.233300   16222 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I1107 23:01:20.284415   16222 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-778371"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (15.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-778371 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-778371 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.230472272s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (15.23s)

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-778371
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-778371: exit status 85 (74.207678ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-778371 | jenkins | v1.32.0 | 07 Nov 23 23:00 UTC |          |
	|         | -p download-only-778371        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-778371 | jenkins | v1.32.0 | 07 Nov 23 23:01 UTC |          |
	|         | -p download-only-778371        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 23:01:31
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 23:01:31.762539   16435 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:01:31.762786   16435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:31.762794   16435 out.go:309] Setting ErrFile to fd 2...
	I1107 23:01:31.762799   16435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:01:31.762970   16435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	W1107 23:01:31.763084   16435 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17585-9432/.minikube/config/config.json: open /home/jenkins/minikube-integration/17585-9432/.minikube/config/config.json: no such file or directory
	I1107 23:01:31.763491   16435 out.go:303] Setting JSON to true
	I1107 23:01:31.764304   16435 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2642,"bootTime":1699395450,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:01:31.764359   16435 start.go:138] virtualization: kvm guest
	I1107 23:01:31.766472   16435 out.go:97] [download-only-778371] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:01:31.768274   16435 out.go:169] MINIKUBE_LOCATION=17585
	I1107 23:01:31.766681   16435 notify.go:220] Checking for updates...
	I1107 23:01:31.771640   16435 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:01:31.773320   16435 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:01:31.775144   16435 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:01:31.776882   16435 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1107 23:01:31.780386   16435 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 23:01:31.780896   16435 config.go:182] Loaded profile config "download-only-778371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1107 23:01:31.780937   16435 start.go:810] api.Load failed for download-only-778371: filestore "download-only-778371": Docker machine "download-only-778371" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 23:01:31.781012   16435 driver.go:378] Setting default libvirt URI to qemu:///system
	W1107 23:01:31.781043   16435 start.go:810] api.Load failed for download-only-778371: filestore "download-only-778371": Docker machine "download-only-778371" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 23:01:31.803263   16435 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:01:31.803383   16435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:01:31.862210   16435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-11-07 23:01:31.85392343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:01:31.862362   16435 docker.go:295] overlay module found
	I1107 23:01:31.864076   16435 out.go:97] Using the docker driver based on existing profile
	I1107 23:01:31.864111   16435 start.go:298] selected driver: docker
	I1107 23:01:31.864119   16435 start.go:902] validating driver "docker" against &{Name:download-only-778371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-778371 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:01:31.864271   16435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:01:31.918370   16435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-11-07 23:01:31.910447033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:01:31.918963   16435 cni.go:84] Creating CNI manager for ""
	I1107 23:01:31.918980   16435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1107 23:01:31.918991   16435 start_flags.go:323] config:
	{Name:download-only-778371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-778371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Ne
tworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:01:31.921130   16435 out.go:97] Starting control plane node download-only-778371 in cluster download-only-778371
	I1107 23:01:31.921154   16435 cache.go:121] Beginning downloading kic base image for docker with crio
	I1107 23:01:31.922768   16435 out.go:97] Pulling base image ...
	I1107 23:01:31.922797   16435 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:01:31.922905   16435 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 23:01:31.938343   16435 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 23:01:31.938473   16435 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 23:01:31.938493   16435 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1107 23:01:31.938501   16435 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1107 23:01:31.938508   16435 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1107 23:01:32.024126   16435 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:01:32.024151   16435 cache.go:56] Caching tarball of preloaded images
	I1107 23:01:32.024336   16435 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:01:32.026643   16435 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1107 23:01:32.026670   16435 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:01:32.141496   16435 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1107 23:01:45.102712   16435 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:01:45.102808   16435 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17585-9432/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1107 23:01:46.046755   16435 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1107 23:01:46.046875   16435 profile.go:148] Saving config to /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/download-only-778371/config.json ...
	I1107 23:01:46.047070   16435 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1107 23:01:46.047231   16435 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17585-9432/.minikube/cache/linux/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-778371"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-778371
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.29s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-849450 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-849450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-849450
--- PASS: TestDownloadOnlyKic (1.29s)

TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-173782 --alsologtostderr --binary-mirror http://127.0.0.1:41013 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-173782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-173782
--- PASS: TestBinaryMirror (0.76s)

TestOffline (80.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-686073 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-686073 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m17.31343581s)
helpers_test.go:175: Cleaning up "offline-crio-686073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-686073
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-686073: (2.714054122s)
--- PASS: TestOffline (80.03s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-890770
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-890770: exit status 85 (68.027023ms)

-- stdout --
	* Profile "addons-890770" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-890770"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-890770
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-890770: exit status 85 (69.698011ms)

-- stdout --
	* Profile "addons-890770" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-890770"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (145.68s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-890770 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-890770 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.676819585s)
--- PASS: TestAddons/Setup (145.68s)

TestAddons/parallel/Registry (15.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 14.478404ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9wd9z" [4d059836-b855-4bb6-b803-c0168e7c81ac] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013236006s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jgztn" [5a51acc2-8ab5-4ee4-bf7b-ba4efd67d0bf] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011880023s
addons_test.go:339: (dbg) Run:  kubectl --context addons-890770 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-890770 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-890770 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.772236425s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 ip
2023/11/07 23:04:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.61s)

TestAddons/parallel/InspektorGadget (10.65s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-stkl9" [4582d62a-637f-4271-a799-86cb27e40029] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012256239s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-890770
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-890770: (5.632239983s)
--- PASS: TestAddons/parallel/InspektorGadget (10.65s)

TestAddons/parallel/MetricsServer (5.66s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.152023ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-7ggdv" [2b50a7aa-9578-4b46-a1fc-223b5c78a661] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011169042s
addons_test.go:414: (dbg) Run:  kubectl --context addons-890770 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)

TestAddons/parallel/HelmTiller (11.93s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.22079ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-gwrk4" [0df13ec3-b6cb-4894-b86a-d25f8f3bc106] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010667118s
addons_test.go:472: (dbg) Run:  kubectl --context addons-890770 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-890770 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.702714427s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p addons-890770 addons disable helm-tiller --alsologtostderr -v=1: (1.207592131s)
--- PASS: TestAddons/parallel/HelmTiller (11.93s)

TestAddons/parallel/CSI (68.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 18.770935ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-890770 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-890770 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [48357eaa-39dc-42e0-ac22-f80fc8bb2f19] Pending
helpers_test.go:344: "task-pv-pod" [48357eaa-39dc-42e0-ac22-f80fc8bb2f19] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [48357eaa-39dc-42e0-ac22-f80fc8bb2f19] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.012505032s
addons_test.go:583: (dbg) Run:  kubectl --context addons-890770 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-890770 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-890770 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-890770 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-890770 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-890770 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-890770 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-890770 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8e7a425c-78c6-42c0-ae1d-03b0ef958af9] Pending
helpers_test.go:344: "task-pv-pod-restore" [8e7a425c-78c6-42c0-ae1d-03b0ef958af9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8e7a425c-78c6-42c0-ae1d-03b0ef958af9] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.009407151s
addons_test.go:625: (dbg) Run:  kubectl --context addons-890770 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-890770 delete pod task-pv-pod-restore: (1.085108993s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-890770 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-890770 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-890770 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.923270823s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.82s)

TestAddons/parallel/Headlamp (18.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-890770 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-890770 --alsologtostderr -v=1: (1.195607922s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-h789z" [e2275ab1-5a79-4c26-9b7d-642aa32e500c] Pending
helpers_test.go:344: "headlamp-94b766c-h789z" [e2275ab1-5a79-4c26-9b7d-642aa32e500c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-h789z" [e2275ab1-5a79-4c26-9b7d-642aa32e500c] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.011760269s
--- PASS: TestAddons/parallel/Headlamp (18.21s)

TestAddons/parallel/CloudSpanner (5.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-zqw54" [59de1830-7b21-4f96-820d-80dcd9ec4354] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008774856s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-890770
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/parallel/LocalPath (55.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-890770 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-890770 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-890770 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [559f2731-866b-4592-a31b-b51ca53d0ca7] Pending
helpers_test.go:344: "test-local-path" [559f2731-866b-4592-a31b-b51ca53d0ca7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [559f2731-866b-4592-a31b-b51ca53d0ca7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [559f2731-866b-4592-a31b-b51ca53d0ca7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00982513s
addons_test.go:890: (dbg) Run:  kubectl --context addons-890770 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 ssh "cat /opt/local-path-provisioner/pvc-fbfec044-5f57-4c8e-aafa-666d902b4ff6_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-890770 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-890770 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-890770 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-890770 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.350953509s)
--- PASS: TestAddons/parallel/LocalPath (55.93s)

TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zfkgl" [2205ac04-9181-4a44-a293-1022552e9e82] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010595213s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-890770
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-890770 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-890770 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (12.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-890770
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-890770: (11.898333496s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-890770
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-890770
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-890770
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

TestCertOptions (28.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-144104 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1107 23:35:53.401779   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-144104 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.606894238s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-144104 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-144104 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-144104 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-144104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-144104
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-144104: (1.987368775s)
--- PASS: TestCertOptions (28.20s)

TestCertExpiration (236.65s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-877066 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-877066 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.927351952s)
E1107 23:35:01.655666   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-877066 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-877066 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.353734519s)
helpers_test.go:175: Cleaning up "cert-expiration-877066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-877066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-877066: (2.371234557s)
--- PASS: TestCertExpiration (236.65s)

TestForceSystemdFlag (36.76s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-712135 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1107 23:34:15.302763   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-712135 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.739118385s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-712135 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-712135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-712135
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-712135: (2.725085964s)
--- PASS: TestForceSystemdFlag (36.76s)

TestForceSystemdEnv (36.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-846436 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-846436 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.449406641s)
helpers_test.go:175: Cleaning up "force-systemd-env-846436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-846436
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-846436: (3.777258924s)
--- PASS: TestForceSystemdEnv (36.23s)

TestKVMDriverInstallOrUpdate (5.13s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.13s)

TestErrorSpam/setup (21.52s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-347711 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-347711 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-347711 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-347711 --driver=docker  --container-runtime=crio: (21.517327082s)
--- PASS: TestErrorSpam/setup (21.52s)

TestErrorSpam/start (0.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 status
--- PASS: TestErrorSpam/status (0.88s)

TestErrorSpam/pause (1.53s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 pause
--- PASS: TestErrorSpam/pause (1.53s)

TestErrorSpam/unpause (1.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (1.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 stop: (1.209454303s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347711 --log_dir /tmp/nospam-347711 stop
--- PASS: TestErrorSpam/stop (1.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17585-9432/.minikube/files/etc/test/nested/copy/16211/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.85s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773400 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1107 23:09:15.303947   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:15.309764   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:15.320029   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:15.340353   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:15.380687   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:15.460983   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:15.621411   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:15.941979   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:16.582902   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:17.863127   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:20.425010   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:25.545506   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:09:35.786165   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-773400 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.846907417s)
--- PASS: TestFunctional/serial/StartWithProxy (69.85s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.57s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773400 --alsologtostderr -v=8
E1107 23:09:56.267005   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-773400 --alsologtostderr -v=8: (25.564433166s)
functional_test.go:659: soft start took 25.565158341s for "functional-773400" cluster.
--- PASS: TestFunctional/serial/SoftStart (25.57s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-773400 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.60s)

TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-773400 /tmp/TestFunctionalserialCacheCmdcacheadd_local146086947/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 cache add minikube-local-cache-test:functional-773400
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 cache add minikube-local-cache-test:functional-773400: (1.613415703s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 cache delete minikube-local-cache-test:functional-773400
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-773400
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.483407ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 kubectl -- --context functional-773400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-773400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (33.16s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1107 23:10:37.227464   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-773400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.164523462s)
functional_test.go:757: restart took 33.164655962s for "functional-773400" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.16s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-773400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 logs: (1.364621903s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 logs --file /tmp/TestFunctionalserialLogsFileCmd3436269878/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 logs --file /tmp/TestFunctionalserialLogsFileCmd3436269878/001/logs.txt: (1.378297465s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

TestFunctional/serial/InvalidService (4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-773400 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-773400
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-773400: exit status 115 (339.162927ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30133 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-773400 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 config get cpus: exit status 14 (84.497758ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 config get cpus: exit status 14 (64.644498ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (10.88s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-773400 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-773400 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 52866: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.88s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773400 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-773400 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (198.609471ms)

-- stdout --
	* [functional-773400] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1107 23:11:21.422278   52731 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:11:21.422604   52731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:11:21.422618   52731 out.go:309] Setting ErrFile to fd 2...
	I1107 23:11:21.422626   52731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:11:21.422904   52731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:11:21.423558   52731 out.go:303] Setting JSON to false
	I1107 23:11:21.425099   52731 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3232,"bootTime":1699395450,"procs":512,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:11:21.425163   52731 start.go:138] virtualization: kvm guest
	I1107 23:11:21.427969   52731 out.go:177] * [functional-773400] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:11:21.429633   52731 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:11:21.429729   52731 notify.go:220] Checking for updates...
	I1107 23:11:21.431419   52731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:11:21.433457   52731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:11:21.435093   52731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:11:21.437541   52731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:11:21.438948   52731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:11:21.441328   52731 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:11:21.442019   52731 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:11:21.468607   52731 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:11:21.468717   52731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:11:21.547559   52731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-11-07 23:11:21.539069194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:11:21.547653   52731 docker.go:295] overlay module found
	I1107 23:11:21.549930   52731 out.go:177] * Using the docker driver based on existing profile
	I1107 23:11:21.551464   52731 start.go:298] selected driver: docker
	I1107 23:11:21.551487   52731 start.go:902] validating driver "docker" against &{Name:functional-773400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-773400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:11:21.551594   52731 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:11:21.554163   52731 out.go:177] 
	W1107 23:11:21.555929   52731 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 23:11:21.557294   52731 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773400 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)

TestFunctional/parallel/InternationalLanguage (0.38s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773400 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-773400 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (379.800059ms)

-- stdout --
	* [functional-773400] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1107 23:11:06.814763   50239 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:11:06.814925   50239 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:11:06.814941   50239 out.go:309] Setting ErrFile to fd 2...
	I1107 23:11:06.814945   50239 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:11:06.815281   50239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:11:06.837351   50239 out.go:303] Setting JSON to false
	I1107 23:11:06.838844   50239 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3217,"bootTime":1699395450,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:11:06.838946   50239 start.go:138] virtualization: kvm guest
	I1107 23:11:06.907725   50239 out.go:177] * [functional-773400] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1107 23:11:06.926360   50239 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:11:06.926263   50239 notify.go:220] Checking for updates...
	I1107 23:11:06.949373   50239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:11:06.951572   50239 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:11:06.987057   50239 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:11:06.997538   50239 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:11:06.999841   50239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:11:07.002435   50239 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:11:07.003398   50239 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:11:07.026673   50239 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:11:07.026759   50239 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:11:07.110510   50239 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:50 SystemTime:2023-11-07 23:11:07.100195453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:11:07.110603   50239 docker.go:295] overlay module found
	I1107 23:11:07.115488   50239 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1107 23:11:07.117209   50239 start.go:298] selected driver: docker
	I1107 23:11:07.117235   50239 start.go:902] validating driver "docker" against &{Name:functional-773400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-773400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 23:11:07.117349   50239 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:11:07.119865   50239 out.go:177] 
	W1107 23:11:07.121493   50239 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 23:11:07.123068   50239 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.38s)

TestFunctional/parallel/StatusCmd (0.93s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)

TestFunctional/parallel/ServiceCmdConnect (6.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-773400 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-773400 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-mpn65" [ebea705c-da7f-4705-a402-872f58edc490] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-mpn65" [ebea705c-da7f-4705-a402-872f58edc490] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.011746783s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31174
functional_test.go:1674: http://192.168.49.2:31174: success! body:

Hostname: hello-node-connect-55497b8b78-mpn65

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31174
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.54s)

TestFunctional/parallel/AddonsCmd (0.28s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (45.24s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ac489b04-0fc3-43d3-b98a-8c0a82235b99] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.020121287s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-773400 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-773400 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-773400 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-773400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b829dd9a-4b2f-4048-98fa-eb32fd2436e9] Pending
helpers_test.go:344: "sp-pod" [b829dd9a-4b2f-4048-98fa-eb32fd2436e9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b829dd9a-4b2f-4048-98fa-eb32fd2436e9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.011089324s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-773400 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-773400 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-773400 delete -f testdata/storage-provisioner/pod.yaml: (1.15251758s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-773400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6ef82bd1-6708-40c3-857e-6fa4d9a63c05] Pending
helpers_test.go:344: "sp-pod" [6ef82bd1-6708-40c3-857e-6fa4d9a63c05] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6ef82bd1-6708-40c3-857e-6fa4d9a63c05] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.020820482s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-773400 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.24s)

TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

TestFunctional/parallel/CpCmd (1.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh -n functional-773400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 cp functional-773400:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3773017870/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh -n functional-773400 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

TestFunctional/parallel/MySQL (24.32s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-773400 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-vxjck" [f1be7be4-c1bc-4211-9b94-83e6a89942c8] Pending
helpers_test.go:344: "mysql-859648c796-vxjck" [f1be7be4-c1bc-4211-9b94-83e6a89942c8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-vxjck" [f1be7be4-c1bc-4211-9b94-83e6a89942c8] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.018784781s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-773400 exec mysql-859648c796-vxjck -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-773400 exec mysql-859648c796-vxjck -- mysql -ppassword -e "show databases;": exit status 1 (213.129164ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-773400 exec mysql-859648c796-vxjck -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-773400 exec mysql-859648c796-vxjck -- mysql -ppassword -e "show databases;": exit status 1 (157.839075ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-773400 exec mysql-859648c796-vxjck -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.32s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16211/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo cat /etc/test/nested/copy/16211/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.94s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16211.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo cat /etc/ssl/certs/16211.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16211.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo cat /usr/share/ca-certificates/16211.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/162112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo cat /etc/ssl/certs/162112.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/162112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo cat /usr/share/ca-certificates/162112.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.94s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-773400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 ssh "sudo systemctl is-active docker": exit status 1 (299.029484ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 ssh "sudo systemctl is-active containerd": exit status 1 (362.270702ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-773400 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-773400 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-vfjct" [aea0e2a8-d8c4-4244-905c-40af1bc35f72] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-vfjct" [aea0e2a8-d8c4-4244-905c-40af1bc35f72] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.012889437s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773400 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-773400
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773400 image ls --format short --alsologtostderr:
I1107 23:11:29.936790   55510 out.go:296] Setting OutFile to fd 1 ...
I1107 23:11:29.936945   55510 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:29.936955   55510 out.go:309] Setting ErrFile to fd 2...
I1107 23:11:29.936960   55510 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:29.937202   55510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
I1107 23:11:29.937831   55510 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:29.937945   55510 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:29.938372   55510 cli_runner.go:164] Run: docker container inspect functional-773400 --format={{.State.Status}}
I1107 23:11:29.955442   55510 ssh_runner.go:195] Run: systemctl --version
I1107 23:11:29.955497   55510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773400
I1107 23:11:29.972241   55510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/functional-773400/id_rsa Username:docker}
I1107 23:11:30.056380   55510 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773400 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 10baa1ca17068 | 123MB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | latest             | c20060033e06f | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 5374347291230 | 127MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-773400  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.28.3            | bfc896cf80fba | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 6d1b4fd1b182d | 61.5MB |
| docker.io/library/mysql                 | 5.7                | 547b3c3c15a96 | 520MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773400 image ls --format table --alsologtostderr:
I1107 23:11:31.269100   55800 out.go:296] Setting OutFile to fd 1 ...
I1107 23:11:31.269288   55800 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:31.269301   55800 out.go:309] Setting ErrFile to fd 2...
I1107 23:11:31.269309   55800 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:31.269548   55800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
I1107 23:11:31.270189   55800 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:31.270408   55800 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:31.270849   55800 cli_runner.go:164] Run: docker container inspect functional-773400 --format={{.State.Status}}
I1107 23:11:31.289625   55800 ssh_runner.go:195] Run: systemctl --version
I1107 23:11:31.289691   55800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773400
I1107 23:11:31.309322   55800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/functional-773400/id_rsa Username:docker}
I1107 23:11:31.396596   55800 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773400 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6","repoDigests":["docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9","docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519576537"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c
441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-773400"],"size":"34114467"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e4950
2b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"74691991"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{
"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"123188534"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"127165392"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"61498678"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647","repoDigests":["docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6","docker.io/library/nginx@sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773400 image ls --format json --alsologtostderr:
I1107 23:11:31.033445   55756 out.go:296] Setting OutFile to fd 1 ...
I1107 23:11:31.033614   55756 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:31.033627   55756 out.go:309] Setting ErrFile to fd 2...
I1107 23:11:31.033635   55756 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:31.033926   55756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
I1107 23:11:31.034747   55756 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:31.034911   55756 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:31.035505   55756 cli_runner.go:164] Run: docker container inspect functional-773400 --format={{.State.Status}}
I1107 23:11:31.055448   55756 ssh_runner.go:195] Run: systemctl --version
I1107 23:11:31.055504   55756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773400
I1107 23:11:31.076680   55756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/functional-773400/id_rsa Username:docker}
I1107 23:11:31.164466   55756 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773400 image ls --format yaml --alsologtostderr:
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6
repoDigests:
- docker.io/library/mysql@sha256:444e015ba2ad9fc0884a82cef6c3b15f89db003aef11b55e4daca24f55538cb9
- docker.io/library/mysql@sha256:880063e8acda81825f0b946eff47c45235840480da03e71a22113ebafe166a3d
repoTags:
- docker.io/library/mysql:5.7
size: "519576537"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"
- id: c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
repoDigests:
- docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6
- docker.io/library/nginx@sha256:d2e65182b5fd330470eca9b8e23e8a1a0d87cc9b820eb1fb3f034bf8248d37ee
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-773400
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773400 image ls --format yaml --alsologtostderr:
I1107 23:11:30.154310   55557 out.go:296] Setting OutFile to fd 1 ...
I1107 23:11:30.154649   55557 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:30.154662   55557 out.go:309] Setting ErrFile to fd 2...
I1107 23:11:30.154670   55557 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:30.154966   55557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
I1107 23:11:30.155783   55557 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:30.155934   55557 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:30.156478   55557 cli_runner.go:164] Run: docker container inspect functional-773400 --format={{.State.Status}}
I1107 23:11:30.173236   55557 ssh_runner.go:195] Run: systemctl --version
I1107 23:11:30.173297   55557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773400
I1107 23:11:30.189872   55557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/functional-773400/id_rsa Username:docker}
I1107 23:11:30.272269   55557 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 ssh pgrep buildkitd: exit status 1 (257.162407ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image build -t localhost/my-image:functional-773400 testdata/build --alsologtostderr
2023/11/07 23:11:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 image build -t localhost/my-image:functional-773400 testdata/build --alsologtostderr: (2.70821319s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773400 image build -t localhost/my-image:functional-773400 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 92c319655e3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-773400
--> 1d0423fe62f
Successfully tagged localhost/my-image:functional-773400
1d0423fe62f0b034e9b281cd3d23c167c80d01b6e67a5bff972343023637e36e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773400 image build -t localhost/my-image:functional-773400 testdata/build --alsologtostderr:
I1107 23:11:30.624570   55682 out.go:296] Setting OutFile to fd 1 ...
I1107 23:11:30.624724   55682 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:30.624734   55682 out.go:309] Setting ErrFile to fd 2...
I1107 23:11:30.624738   55682 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 23:11:30.624976   55682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
I1107 23:11:30.625598   55682 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:30.626195   55682 config.go:182] Loaded profile config "functional-773400": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1107 23:11:30.626610   55682 cli_runner.go:164] Run: docker container inspect functional-773400 --format={{.State.Status}}
I1107 23:11:30.643489   55682 ssh_runner.go:195] Run: systemctl --version
I1107 23:11:30.643574   55682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-773400
I1107 23:11:30.660662   55682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/functional-773400/id_rsa Username:docker}
I1107 23:11:30.744225   55682 build_images.go:151] Building image from path: /tmp/build.2835616235.tar
I1107 23:11:30.744317   55682 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1107 23:11:30.754078   55682 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2835616235.tar
I1107 23:11:30.757930   55682 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2835616235.tar: stat -c "%s %y" /var/lib/minikube/build/build.2835616235.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2835616235.tar': No such file or directory
I1107 23:11:30.757965   55682 ssh_runner.go:362] scp /tmp/build.2835616235.tar --> /var/lib/minikube/build/build.2835616235.tar (3072 bytes)
I1107 23:11:30.781991   55682 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2835616235
I1107 23:11:30.790651   55682 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2835616235 -xf /var/lib/minikube/build/build.2835616235.tar
I1107 23:11:30.799539   55682 crio.go:297] Building image: /var/lib/minikube/build/build.2835616235
I1107 23:11:30.799598   55682 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-773400 /var/lib/minikube/build/build.2835616235 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1107 23:11:33.251636   55682 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-773400 /var/lib/minikube/build/build.2835616235 --cgroup-manager=cgroupfs: (2.452014026s)
I1107 23:11:33.251697   55682 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2835616235
I1107 23:11:33.260003   55682 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2835616235.tar
I1107 23:11:33.267938   55682 build_images.go:207] Built localhost/my-image:functional-773400 from /tmp/build.2835616235.tar
I1107 23:11:33.267969   55682 build_images.go:123] succeeded building to: functional-773400
I1107 23:11:33.267974   55682 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.949921835s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-773400
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image load --daemon gcr.io/google-containers/addon-resizer:functional-773400 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 image load --daemon gcr.io/google-containers/addon-resizer:functional-773400 --alsologtostderr: (3.63825529s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image load --daemon gcr.io/google-containers/addon-resizer:functional-773400 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 image load --daemon gcr.io/google-containers/addon-resizer:functional-773400 --alsologtostderr: (2.904565827s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.991301329s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-773400
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image load --daemon gcr.io/google-containers/addon-resizer:functional-773400 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 image load --daemon gcr.io/google-containers/addon-resizer:functional-773400 --alsologtostderr: (6.107670073s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 service list -o json
functional_test.go:1493: Took "417.04868ms" to run "out/minikube-linux-amd64 -p functional-773400 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31684
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.75s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31684
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.75s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "296.391309ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "75.172956ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "309.743463ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "72.626196ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (16.23s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdany-port2209685997/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699398668299873050" to /tmp/TestFunctionalparallelMountCmdany-port2209685997/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699398668299873050" to /tmp/TestFunctionalparallelMountCmdany-port2209685997/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699398668299873050" to /tmp/TestFunctionalparallelMountCmdany-port2209685997/001/test-1699398668299873050
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (388.274042ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  7 23:11 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  7 23:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  7 23:11 test-1699398668299873050
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh cat /mount-9p/test-1699398668299873050
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-773400 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a2863397-e21f-43f6-a8b5-94e81c132f5a] Pending
helpers_test.go:344: "busybox-mount" [a2863397-e21f-43f6-a8b5-94e81c132f5a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a2863397-e21f-43f6-a8b5-94e81c132f5a] Running
helpers_test.go:344: "busybox-mount" [a2863397-e21f-43f6-a8b5-94e81c132f5a] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a2863397-e21f-43f6-a8b5-94e81c132f5a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.009843141s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-773400 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdany-port2209685997/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (16.23s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image save gcr.io/google-containers/addon-resizer:functional-773400 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image rm gcr.io/google-containers/addon-resizer:functional-773400 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-773400
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 image save --daemon gcr.io/google-containers/addon-resizer:functional-773400 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-773400 image save --daemon gcr.io/google-containers/addon-resizer:functional-773400 --alsologtostderr: (1.020079246s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-773400
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.06s)

TestFunctional/parallel/MountCmd/specific-port (1.79s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdspecific-port3951622168/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.979404ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdspecific-port3951622168/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 ssh "sudo umount -f /mount-9p": exit status 1 (282.72589ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-773400 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdspecific-port3951622168/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2706462753/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2706462753/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2706462753/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T" /mount1: exit status 1 (421.838678ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773400 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-773400 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2706462753/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2706462753/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2706462753/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-773400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-773400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-773400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 54738: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-773400 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-773400 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.28s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-773400 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b2ff905b-229c-4011-b2bc-4a1cc80faac4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b2ff905b-229c-4011-b2bc-4a1cc80faac4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.010570493s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.28s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-773400 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.127.240 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-773400 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.31s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-773400
--- PASS: TestFunctional/delete_addon-resizer_images (0.31s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-773400
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-773400
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (95.03s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-124713 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1107 23:11:59.147748   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-124713 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m35.033400067s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (95.03s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.81s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-124713 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-124713 addons enable ingress --alsologtostderr -v=5: (14.809105801s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.81s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-124713 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

TestJSONOutput/start/Command (38.7s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-775490 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1107 23:17:15.323738   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-775490 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (38.700023014s)
--- PASS: TestJSONOutput/start/Command (38.70s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-775490 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-775490 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-775490 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-775490 --output=json --user=testUser: (5.731883973s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-516801 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-516801 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.547773ms)

-- stdout --
	{"specversion":"1.0","id":"a448d2b9-33c5-47a2-905f-5c74f79dcd13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-516801] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6739316-c3a9-43d6-8c2f-d4b42aca3e2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"10742217-4b85-4b0d-a1ba-db911a4bcac8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8362574b-4e95-4805-ac31-e655b7d7f34e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig"}}
	{"specversion":"1.0","id":"cc098709-aaf4-403d-9926-dd8488fedc5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube"}}
	{"specversion":"1.0","id":"e019e3f7-55d8-49b5-9ed2-00dfdd41a1d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c83b4121-e345-49e8-8c5c-a4177edde1f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d91f54a-77f4-4a4d-a124-9659cc3ad18a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-516801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-516801
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (41.45s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-984110 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-984110 --network=: (39.728984223s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-984110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-984110
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-984110: (1.705870646s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.45s)

TestKicCustomNetwork/use_default_bridge_network (25.51s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-068902 --network=bridge
E1107 23:18:37.244680   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:18:38.611286   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:38.616567   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:38.626834   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:38.647147   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:38.687458   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:38.767840   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:38.928232   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:39.248588   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-068902 --network=bridge: (23.52477644s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-068902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-068902
E1107 23:18:39.889466   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:41.170360   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-068902: (1.969550352s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.51s)

TestKicExistingNetwork (26.88s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-883791 --network=existing-network
E1107 23:18:43.730632   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:48.851568   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:18:59.092793   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-883791 --network=existing-network: (24.782661994s)
helpers_test.go:175: Cleaning up "existing-network-883791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-883791
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-883791: (1.960886541s)
--- PASS: TestKicExistingNetwork (26.88s)

TestKicCustomSubnet (27.03s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-499299 --subnet=192.168.60.0/24
E1107 23:19:15.302812   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:19:19.573683   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-499299 --subnet=192.168.60.0/24: (24.978521837s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-499299 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-499299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-499299
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-499299: (2.031884363s)
--- PASS: TestKicCustomSubnet (27.03s)

TestKicStaticIP (28.82s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-318690 --static-ip=192.168.200.200
E1107 23:20:00.533914   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-318690 --static-ip=192.168.200.200: (26.583904434s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-318690 ip
helpers_test.go:175: Cleaning up "static-ip-318690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-318690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-318690: (2.098821872s)
--- PASS: TestKicStaticIP (28.82s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (51.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-068391 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-068391 --driver=docker  --container-runtime=crio: (22.012457794s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-070612 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-070612 --driver=docker  --container-runtime=crio: (23.934038444s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-068391
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-070612
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-070612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-070612
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-070612: (1.905055186s)
helpers_test.go:175: Cleaning up "first-068391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-068391
E1107 23:20:53.400943   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-068391: (2.195033991s)
--- PASS: TestMinikubeProfile (51.06s)

TestMountStart/serial/StartWithMountFirst (8.31s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-596467 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-596467 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.311375125s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.31s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-596467 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.75s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-609122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-609122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.748195386s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.75s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-596467 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-596467 --alsologtostderr -v=5: (1.623690255s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-609122
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-609122: (1.2066212s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-609122
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-609122: (6.358779452s)
--- PASS: TestMountStart/serial/RestartStopped (7.36s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (53.78s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542158 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1107 23:21:22.454579   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-542158 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.340244372s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (53.78s)

TestMultiNode/serial/DeployApp2Nodes (5.38s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-542158 -- rollout status deployment/busybox: (3.64498086s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-7phrb -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-n8tmh -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-7phrb -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-n8tmh -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-7phrb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-542158 -- exec busybox-5bc68d56bd-n8tmh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.38s)

TestMultiNode/serial/AddNode (48.22s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-542158 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-542158 -v 3 --alsologtostderr: (47.634795964s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.22s)

TestMultiNode/serial/ProfileList (0.27s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

TestMultiNode/serial/CopyFile (9.22s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp testdata/cp-test.txt multinode-542158:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp multinode-542158:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1034292737/001/cp-test_multinode-542158.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp multinode-542158:/home/docker/cp-test.txt multinode-542158-m02:/home/docker/cp-test_multinode-542158_multinode-542158-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m02 "sudo cat /home/docker/cp-test_multinode-542158_multinode-542158-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp multinode-542158:/home/docker/cp-test.txt multinode-542158-m03:/home/docker/cp-test_multinode-542158_multinode-542158-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m03 "sudo cat /home/docker/cp-test_multinode-542158_multinode-542158-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp testdata/cp-test.txt multinode-542158-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp multinode-542158-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1034292737/001/cp-test_multinode-542158-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp multinode-542158-m02:/home/docker/cp-test.txt multinode-542158:/home/docker/cp-test_multinode-542158-m02_multinode-542158.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158 "sudo cat /home/docker/cp-test_multinode-542158-m02_multinode-542158.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp multinode-542158-m02:/home/docker/cp-test.txt multinode-542158-m03:/home/docker/cp-test_multinode-542158-m02_multinode-542158-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m03 "sudo cat /home/docker/cp-test_multinode-542158-m02_multinode-542158-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp testdata/cp-test.txt multinode-542158-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp multinode-542158-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1034292737/001/cp-test_multinode-542158-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp multinode-542158-m03:/home/docker/cp-test.txt multinode-542158:/home/docker/cp-test_multinode-542158-m03_multinode-542158.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158 "sudo cat /home/docker/cp-test_multinode-542158-m03_multinode-542158.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 cp multinode-542158-m03:/home/docker/cp-test.txt multinode-542158-m02:/home/docker/cp-test_multinode-542158-m03_multinode-542158-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 ssh -n multinode-542158-m02 "sudo cat /home/docker/cp-test_multinode-542158-m03_multinode-542158-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.22s)

TestMultiNode/serial/StopNode (2.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-542158 node stop m03: (1.216671037s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-542158 status: exit status 7 (463.205283ms)

-- stdout --
	multinode-542158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-542158-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-542158-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-542158 status --alsologtostderr: exit status 7 (468.015098ms)

-- stdout --
	multinode-542158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-542158-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-542158-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1107 23:23:23.797185  116478 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:23:23.797460  116478 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:23:23.797471  116478 out.go:309] Setting ErrFile to fd 2...
	I1107 23:23:23.797476  116478 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:23:23.797723  116478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:23:23.797940  116478 out.go:303] Setting JSON to false
	I1107 23:23:23.797973  116478 mustload.go:65] Loading cluster: multinode-542158
	I1107 23:23:23.798077  116478 notify.go:220] Checking for updates...
	I1107 23:23:23.798458  116478 config.go:182] Loaded profile config "multinode-542158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:23:23.798473  116478 status.go:255] checking status of multinode-542158 ...
	I1107 23:23:23.798899  116478 cli_runner.go:164] Run: docker container inspect multinode-542158 --format={{.State.Status}}
	I1107 23:23:23.816032  116478 status.go:330] multinode-542158 host status = "Running" (err=<nil>)
	I1107 23:23:23.816054  116478 host.go:66] Checking if "multinode-542158" exists ...
	I1107 23:23:23.816297  116478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542158
	I1107 23:23:23.832821  116478 host.go:66] Checking if "multinode-542158" exists ...
	I1107 23:23:23.833105  116478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:23:23.833140  116478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158
	I1107 23:23:23.849833  116478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158/id_rsa Username:docker}
	I1107 23:23:23.933929  116478 ssh_runner.go:195] Run: systemctl --version
	I1107 23:23:23.937954  116478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:23:23.948648  116478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:23:24.005586  116478 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-11-07 23:23:23.996092046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:23:24.006176  116478 kubeconfig.go:92] found "multinode-542158" server: "https://192.168.58.2:8443"
	I1107 23:23:24.006201  116478 api_server.go:166] Checking apiserver status ...
	I1107 23:23:24.006241  116478 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1107 23:23:24.016416  116478 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	I1107 23:23:24.025079  116478 api_server.go:182] apiserver freezer: "10:freezer:/docker/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/crio/crio-c687ec119abf5b3650611710e2e1394e62d18b1fe62bab195c45f30dc987374b"
	I1107 23:23:24.025143  116478 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7dbe1742d15cea182a0fd88dcbc3243670a0263d4b22f658045910b0a6942af2/crio/crio-c687ec119abf5b3650611710e2e1394e62d18b1fe62bab195c45f30dc987374b/freezer.state
	I1107 23:23:24.032786  116478 api_server.go:204] freezer state: "THAWED"
	I1107 23:23:24.032816  116478 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1107 23:23:24.039253  116478 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1107 23:23:24.039284  116478 status.go:421] multinode-542158 apiserver status = Running (err=<nil>)
	I1107 23:23:24.039293  116478 status.go:257] multinode-542158 status: &{Name:multinode-542158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:23:24.039309  116478 status.go:255] checking status of multinode-542158-m02 ...
	I1107 23:23:24.039531  116478 cli_runner.go:164] Run: docker container inspect multinode-542158-m02 --format={{.State.Status}}
	I1107 23:23:24.057817  116478 status.go:330] multinode-542158-m02 host status = "Running" (err=<nil>)
	I1107 23:23:24.057840  116478 host.go:66] Checking if "multinode-542158-m02" exists ...
	I1107 23:23:24.058076  116478 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-542158-m02
	I1107 23:23:24.075207  116478 host.go:66] Checking if "multinode-542158-m02" exists ...
	I1107 23:23:24.075455  116478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 23:23:24.075487  116478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-542158-m02
	I1107 23:23:24.092501  116478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17585-9432/.minikube/machines/multinode-542158-m02/id_rsa Username:docker}
	I1107 23:23:24.176770  116478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 23:23:24.187357  116478 status.go:257] multinode-542158-m02 status: &{Name:multinode-542158-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:23:24.187386  116478 status.go:255] checking status of multinode-542158-m03 ...
	I1107 23:23:24.187618  116478 cli_runner.go:164] Run: docker container inspect multinode-542158-m03 --format={{.State.Status}}
	I1107 23:23:24.204767  116478 status.go:330] multinode-542158-m03 host status = "Stopped" (err=<nil>)
	I1107 23:23:24.204790  116478 status.go:343] host is not running, skipping remaining checks
	I1107 23:23:24.204795  116478 status.go:257] multinode-542158-m03 status: &{Name:multinode-542158-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.15s)

TestMultiNode/serial/StartAfterStop (11.03s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-542158 node start m03 --alsologtostderr: (10.356814822s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.03s)

TestMultiNode/serial/RestartKeepsNodes (113.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-542158
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-542158
E1107 23:23:38.610690   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-542158: (24.863391275s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542158 --wait=true -v=8 --alsologtostderr
E1107 23:24:06.294742   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:24:15.303569   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-542158 --wait=true -v=8 --alsologtostderr: (1m28.392577175s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-542158
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.38s)

TestMultiNode/serial/DeleteNode (4.7s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-542158 node delete m03: (4.112384541s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.70s)

TestMultiNode/serial/StopMultiNode (23.88s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 stop
E1107 23:25:38.349354   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:25:53.401652   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-542158 stop: (23.681795345s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-542158 status: exit status 7 (103.223258ms)

-- stdout --
	multinode-542158
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-542158-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-542158 status --alsologtostderr: exit status 7 (94.973345ms)

-- stdout --
	multinode-542158
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-542158-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1107 23:25:57.149864  126589 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:25:57.150012  126589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:25:57.150025  126589 out.go:309] Setting ErrFile to fd 2...
	I1107 23:25:57.150032  126589 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:25:57.150218  126589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:25:57.150397  126589 out.go:303] Setting JSON to false
	I1107 23:25:57.150422  126589 mustload.go:65] Loading cluster: multinode-542158
	I1107 23:25:57.150529  126589 notify.go:220] Checking for updates...
	I1107 23:25:57.150792  126589 config.go:182] Loaded profile config "multinode-542158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:25:57.150813  126589 status.go:255] checking status of multinode-542158 ...
	I1107 23:25:57.151170  126589 cli_runner.go:164] Run: docker container inspect multinode-542158 --format={{.State.Status}}
	I1107 23:25:57.169200  126589 status.go:330] multinode-542158 host status = "Stopped" (err=<nil>)
	I1107 23:25:57.169228  126589 status.go:343] host is not running, skipping remaining checks
	I1107 23:25:57.169234  126589 status.go:257] multinode-542158 status: &{Name:multinode-542158 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1107 23:25:57.169256  126589 status.go:255] checking status of multinode-542158-m02 ...
	I1107 23:25:57.169503  126589 cli_runner.go:164] Run: docker container inspect multinode-542158-m02 --format={{.State.Status}}
	I1107 23:25:57.186993  126589 status.go:330] multinode-542158-m02 host status = "Stopped" (err=<nil>)
	I1107 23:25:57.187014  126589 status.go:343] host is not running, skipping remaining checks
	I1107 23:25:57.187020  126589 status.go:257] multinode-542158-m02 status: &{Name:multinode-542158-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.88s)

TestMultiNode/serial/RestartMultiNode (73.15s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542158 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-542158 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m12.555329984s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-542158 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (73.15s)

TestMultiNode/serial/ValidateNameConflict (26.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-542158
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542158-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-542158-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.028324ms)

-- stdout --
	* [multinode-542158-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-542158-m02' is duplicated with machine name 'multinode-542158-m02' in profile 'multinode-542158'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-542158-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-542158-m03 --driver=docker  --container-runtime=crio: (23.935974497s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-542158
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-542158: exit status 80 (275.63846ms)

-- stdout --
	* Adding node m03 to cluster multinode-542158
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-542158-m03 already exists in multinode-542158-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-542158-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-542158-m03: (1.877175108s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.23s)

TestPreload (179.16s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-563701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1107 23:28:38.610557   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:29:15.302941   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-563701 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m51.923946814s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-563701 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-563701 image pull gcr.io/k8s-minikube/busybox: (2.376995452s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-563701
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-563701: (5.721503632s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-563701 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-563701 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (56.658047583s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-563701 image list
helpers_test.go:175: Cleaning up "test-preload-563701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-563701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-563701: (2.258092475s)
--- PASS: TestPreload (179.16s)

TestScheduledStopUnix (96.58s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-306128 --memory=2048 --driver=docker  --container-runtime=crio
E1107 23:30:53.401141   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-306128 --memory=2048 --driver=docker  --container-runtime=crio: (21.211644715s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306128 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-306128 -n scheduled-stop-306128
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306128 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306128 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-306128 -n scheduled-stop-306128
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-306128
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-306128 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-306128
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-306128: exit status 7 (79.389182ms)

-- stdout --
	scheduled-stop-306128
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-306128 -n scheduled-stop-306128
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-306128 -n scheduled-stop-306128: exit status 7 (77.047721ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-306128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-306128
E1107 23:32:16.446513   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-306128: (3.946799003s)
--- PASS: TestScheduledStopUnix (96.58s)

TestInsufficientStorage (13.22s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-894020 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-894020 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.814397583s)

-- stdout --
	{"specversion":"1.0","id":"c48396ce-77ef-441c-abac-6c3374d5fe57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-894020] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4232e102-0453-4459-bbfb-bac007c25ffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"914d3bef-56f6-4773-9c58-85a963db3c8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"976f3982-8a63-49fc-b3d6-9c3b63b4b18b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig"}}
	{"specversion":"1.0","id":"c50e61e9-5577-4f0e-8f4b-3e6a3a725e1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube"}}
	{"specversion":"1.0","id":"22299e2c-504a-4515-849d-3e23c60065e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6447c802-2671-4f86-9a1e-1615d16f92e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7667394f-7815-4c64-b7dd-198ab7712ea5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"72522038-825f-4d47-a8fa-1037e90bae6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"efdc46ce-53ca-4168-b624-eca25db5dd1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"921385ba-85c3-45dd-b1fa-0adc5f03eacc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b10a0361-9d4a-4f1b-a004-9980d2992334","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-894020 in cluster insufficient-storage-894020","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1b83062-2b34-4b82-ba86-9660d986d532","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cea84e9b-9492-4d60-a11b-4691fbae99d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0701c49-1c63-454d-b561-2cea3113108c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-894020 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-894020 --output=json --layout=cluster: exit status 7 (275.204933ms)

-- stdout --
	{"Name":"insufficient-storage-894020","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-894020","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1107 23:32:29.121383  148040 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-894020" does not appear in /home/jenkins/minikube-integration/17585-9432/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-894020 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-894020 --output=json --layout=cluster: exit status 7 (264.323601ms)

-- stdout --
	{"Name":"insufficient-storage-894020","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-894020","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1107 23:32:29.386433  148126 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-894020" does not appear in /home/jenkins/minikube-integration/17585-9432/kubeconfig
	E1107 23:32:29.395810  148126 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/insufficient-storage-894020/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-894020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-894020
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-894020: (1.865823942s)
--- PASS: TestInsufficientStorage (13.22s)

TestKubernetesUpgrade (385.89s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-958845 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1107 23:33:38.610684   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-958845 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.872003707s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-958845
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-958845: (4.547482079s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-958845 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-958845 status --format={{.Host}}: exit status 7 (81.066843ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-958845 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-958845 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.053647044s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-958845 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-958845 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-958845 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (123.820071ms)

-- stdout --
	* [kubernetes-upgrade-958845] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-958845
	    minikube start -p kubernetes-upgrade-958845 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9588452 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-958845 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-958845 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-958845 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.711921682s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-958845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-958845
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-958845: (2.432002505s)
--- PASS: TestKubernetesUpgrade (385.89s)

TestMissingContainerUpgrade (159.08s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2164108730.exe start -p missing-upgrade-809802 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Non-zero exit: /tmp/minikube-v1.9.0.2164108730.exe start -p missing-upgrade-809802 --memory=2200 --driver=docker  --container-runtime=crio: exit status 70 (1m7.452803586s)

-- stdout --
	* [missing-upgrade-809802] minikube v1.9.0 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (8 available), Memory=2200MB (32089MB available) ...
	* Preparing Kubernetes v1.18.0 on CRI-O 1.17.0 ...
	  - kubeadm.pod-network-cidr=10.244.0.0/16

-- /stdout --
** stderr ** 
	    > kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s* 
	X Failed to update cluster: updating node: downloading binaries: downloading kubelet: chmod +x /home/jenkins/minikube-integration/17585-9432/.minikube/cache/linux/v1.18.0/kubelet: chmod /home/jenkins/minikube-integration/17585-9432/.minikube/cache/linux/v1.18.0/kubelet.download: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
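The first start above failed because the partially downloaded binary was left at `kubelet.download` and the follow-up chmod never found it at its final path; the retry in the log then succeeded on its own. As a hypothetical manual recovery (the path is taken from the error above; this is not a documented minikube command), one could clear the stale cache entry so the next start re-downloads the binary:

```shell
# Hypothetical recovery sketch: remove the stale kubelet cache entry so the
# next `minikube start` re-downloads it cleanly. Path comes from the log above.
cache_dir="$HOME/.minikube/cache/linux/v1.18.0"
echo "would clear: $cache_dir"
# rm -rf "$cache_dir"   # destructive step left commented out on purpose
```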
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2164108730.exe start -p missing-upgrade-809802 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.2164108730.exe start -p missing-upgrade-809802 --memory=2200 --driver=docker  --container-runtime=crio: (23.662200144s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-809802
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-809802
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-809802 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-809802 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m2.484267806s)
helpers_test.go:175: Cleaning up "missing-upgrade-809802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-809802
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-809802: (2.104505675s)
--- PASS: TestMissingContainerUpgrade (159.08s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757130 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-757130 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (96.139805ms)

-- stdout --
	* [NoKubernetes-757130] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (32.39s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757130 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-757130 --driver=docker  --container-runtime=crio: (31.987558955s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-757130 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.39s)

TestNetworkPlugins/group/false (8.17s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-422611 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-422611 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (433.922414ms)

-- stdout --
	* [false-422611] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17585
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1107 23:32:36.325718  150302 out.go:296] Setting OutFile to fd 1 ...
	I1107 23:32:36.325894  150302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:32:36.325904  150302 out.go:309] Setting ErrFile to fd 2...
	I1107 23:32:36.325912  150302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 23:32:36.326121  150302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17585-9432/.minikube/bin
	I1107 23:32:36.326801  150302 out.go:303] Setting JSON to false
	I1107 23:32:36.328161  150302 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4506,"bootTime":1699395450,"procs":529,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1107 23:32:36.328225  150302 start.go:138] virtualization: kvm guest
	I1107 23:32:36.396107  150302 out.go:177] * [false-422611] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1107 23:32:36.411057  150302 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 23:32:36.411019  150302 notify.go:220] Checking for updates...
	I1107 23:32:36.451334  150302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 23:32:36.474907  150302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17585-9432/kubeconfig
	I1107 23:32:36.477382  150302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17585-9432/.minikube
	I1107 23:32:36.479875  150302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1107 23:32:36.482404  150302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 23:32:36.486391  150302 config.go:182] Loaded profile config "NoKubernetes-757130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:32:36.486492  150302 config.go:182] Loaded profile config "force-systemd-env-846436": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:32:36.486571  150302 config.go:182] Loaded profile config "offline-crio-686073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1107 23:32:36.486779  150302 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 23:32:36.510171  150302 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1107 23:32:36.510305  150302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 23:32:36.567554  150302 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:59 SystemTime:2023-11-07 23:32:36.558440911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1046-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1107 23:32:36.567688  150302 docker.go:295] overlay module found
	I1107 23:32:36.649240  150302 out.go:177] * Using the docker driver based on user configuration
	I1107 23:32:36.681237  150302 start.go:298] selected driver: docker
	I1107 23:32:36.681280  150302 start.go:902] validating driver "docker" against <nil>
	I1107 23:32:36.681293  150302 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 23:32:36.688613  150302 out.go:177] 
	W1107 23:32:36.691013  150302 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1107 23:32:36.693280  150302 out.go:177] 

** /stderr **
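The exit status 14 above is the expected outcome: this subtest deliberately passes `--cni=false` with CRI-O, which ships no fallback networking, so minikube rejects the combination up front (choosing a concrete CNI, e.g. `--cni=bridge`, would pass the same check). A hedged sketch of that validation rule, restated in shell rather than minikube's actual Go code:

```shell
# Sketch only, not minikube's implementation: the CRI-O runtime requires an
# explicit CNI, so --cni=false is rejected before any cluster is created.
runtime="crio"   # value of --container-runtime
cni="false"      # value of --cni
if [ "$runtime" = "crio" ] && [ "$cni" = "false" ]; then
  echo 'X Exiting due to MK_USAGE: The "crio" container runtime requires CNI'
fi
```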
net_test.go:88: 
----------------------- debugLogs start: false-422611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-422611" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-422611

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"
>>> host: /etc/crio:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"
>>> host: crio config:
* Profile "false-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-422611"
----------------------- debugLogs end: false-422611 [took: 7.395878677s] --------------------------------
helpers_test.go:175: Cleaning up "false-422611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-422611
--- PASS: TestNetworkPlugins/group/false (8.17s)

TestStoppedBinaryUpgrade/Setup (2.08s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.08s)

TestNoKubernetes/serial/StartWithStopK8s (8.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757130 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-757130 --no-kubernetes --driver=docker  --container-runtime=crio: (6.046199628s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-757130 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-757130 status -o json: exit status 2 (304.987293ms)
-- stdout --
	{"Name":"NoKubernetes-757130","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-757130
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-757130: (1.956288191s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.31s)

TestNoKubernetes/serial/Start (11.09s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757130 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-757130 --no-kubernetes --driver=docker  --container-runtime=crio: (11.092411562s)
--- PASS: TestNoKubernetes/serial/Start (11.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-757130 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-757130 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.290757ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (1.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-757130
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-757130: (1.309946582s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (7.96s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-757130 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-757130 --driver=docker  --container-runtime=crio: (7.958531445s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.96s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-757130 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-757130 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.85342ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-951392
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

TestPause/serial/Start (42.47s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-456674 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-456674 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (42.470437676s)
--- PASS: TestPause/serial/Start (42.47s)

TestNetworkPlugins/group/auto/Start (40.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.800529011s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.80s)

TestPause/serial/SecondStartNoReconfiguration (47.51s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-456674 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-456674 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.480268258s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.51s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-422611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-422611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lhdrk" [2e3befe3-97c6-4ee8-918b-f3d6f59226be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lhdrk" [2e3befe3-97c6-4ee8-918b-f3d6f59226be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.008801233s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-422611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/Start (69.92s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m9.921253538s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.92s)

TestPause/serial/Pause (0.81s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-456674 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-456674 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-456674 --output=json --layout=cluster: exit status 2 (297.453706ms)
-- stdout --
	{"Name":"pause-456674","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-456674","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-456674 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (0.82s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-456674 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

TestPause/serial/DeletePaused (2.75s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-456674 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-456674 --alsologtostderr -v=5: (2.748582473s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

TestPause/serial/VerifyDeletedResources (0.59s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-456674
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-456674: exit status 1 (16.115402ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-456674: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

TestNetworkPlugins/group/calico/Start (70.22s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m10.215440028s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.22s)

TestNetworkPlugins/group/custom-flannel/Start (62.59s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.592849334s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.59s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-plw8v" [2bb724d7-06be-407a-851a-9d192ea93f06] Running
E1107 23:38:38.610384   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.020792495s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-422611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-422611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lkd69" [ba5425ba-5f36-4caa-ac4f-635cc69d34d2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lkd69" [ba5425ba-5f36-4caa-ac4f-635cc69d34d2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.008869867s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zgkmd" [3e0e12bd-1b01-4941-9b96-6ddced7c8f77] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021863254s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-422611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-422611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-422611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kd6qv" [e4526c52-da4c-43a5-8468-cbf0e190dff9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kd6qv" [e4526c52-da4c-43a5-8468-cbf0e190dff9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.011124396s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.37s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-422611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (35.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1107 23:39:15.302921   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (35.734953578s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (35.74s)

TestNetworkPlugins/group/flannel/Start (64.99s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m4.988229671s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.99s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-422611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-422611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ctls6" [ae6184b9-8369-4d73-88dc-3af48f5e71fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ctls6" [ae6184b9-8369-4d73-88dc-3af48f5e71fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.009728038s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-422611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-422611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-422611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pz7xx" [15282712-2f4e-4289-89ce-6167701b3eb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pz7xx" [15282712-2f4e-4289-89ce-6167701b3eb7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.009896197s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (32.53s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-422611 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-422611 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.226905095s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-422611 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-422611 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.156214097s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-422611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (32.53s)

TestNetworkPlugins/group/bridge/Start (42.85s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-422611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (42.845901721s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.85s)

TestStartStop/group/old-k8s-version/serial/FirstStart (139.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-382775 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-382775 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m19.366482231s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.37s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-l4jpn" [d814ca33-74d4-4064-a075-e2728d702980] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.022090088s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-422611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-422611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mpkmz" [9359ca83-6e2a-4922-b034-dd039f0f27e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mpkmz" [9359ca83-6e2a-4922-b034-dd039f0f27e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.010041082s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.31s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-422611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-422611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hk8s8" [c4e31d15-c989-4cc0-b3c2-a19299ccfaf7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hk8s8" [c4e31d15-c989-4cc0-b3c2-a19299ccfaf7] Running
E1107 23:40:53.401627   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.010364853s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.31s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-422611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestStartStop/group/no-preload/serial/FirstStart (72.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-217244 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-217244 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m12.809174418s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.81s)

TestNetworkPlugins/group/bridge/DNS (32.74s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-422611 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-422611 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.208278847s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-422611 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-422611 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.189430838s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-422611 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (32.74s)

TestStartStop/group/embed-certs/serial/FirstStart (70.79s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-500876 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-500876 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m10.786239683s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.79s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-422611 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-749499 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1107 23:41:56.119672   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:41:56.124928   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:41:56.135155   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:41:56.155459   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:41:56.196292   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:41:56.276643   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:41:56.436933   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:41:56.757823   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:41:57.398017   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:41:58.678629   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:42:01.239067   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:42:06.359521   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-749499 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m11.637709141s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.64s)

TestStartStop/group/no-preload/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-217244 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [833ec472-636d-46e2-8b92-2522b6050b32] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [833ec472-636d-46e2-8b92-2522b6050b32] Running
E1107 23:42:16.599879   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.016720593s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-217244 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-217244 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-217244 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/Stop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-217244 --alsologtostderr -v=3
E1107 23:42:18.349520   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-217244 --alsologtostderr -v=3: (12.026939403s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-500876 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f2c6efce-bed2-4de9-8d40-dbbdc2c425a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f2c6efce-bed2-4de9-8d40-dbbdc2c425a4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.017419942s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-500876 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-382775 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [47ecc24f-d7c3-4564-88ac-1df19568dc5f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [47ecc24f-d7c3-4564-88ac-1df19568dc5f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013992633s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-382775 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-217244 -n no-preload-217244
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-217244 -n no-preload-217244: exit status 7 (112.272188ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-217244 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-382775 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-382775 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/SecondStart (340.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-217244 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-217244 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m40.583174359s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-217244 -n no-preload-217244
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (340.96s)

TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-382775 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-382775 --alsologtostderr -v=3: (12.03623894s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-500876 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-500876 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/embed-certs/serial/Stop (11.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-500876 --alsologtostderr -v=3
E1107 23:42:37.080313   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-500876 --alsologtostderr -v=3: (11.975880559s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-382775 -n old-k8s-version-382775
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-382775 -n old-k8s-version-382775: exit status 7 (98.023622ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-382775 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (430.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-382775 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-382775 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m10.006678793s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-382775 -n old-k8s-version-382775
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (430.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-500876 -n embed-certs-500876
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-500876 -n embed-certs-500876: exit status 7 (78.416111ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-500876 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
TestStartStop/group/embed-certs/serial/SecondStart (346.52s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-500876 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-500876 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m46.101664196s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-500876 -n embed-certs-500876
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (346.52s)
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-749499 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c076ccab-c944-4217-b8b9-d3090e1ca4f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c076ccab-c944-4217-b8b9-d3090e1ca4f0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.016233019s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-749499 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.42s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-749499 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-749499 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.063694033s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-749499 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-749499 --alsologtostderr -v=3
E1107 23:43:18.040642   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-749499 --alsologtostderr -v=3: (12.271866504s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-749499 -n default-k8s-diff-port-749499
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-749499 -n default-k8s-diff-port-749499: exit status 7 (80.328272ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-749499 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (342.46s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-749499 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1107 23:43:34.372195   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:34.377490   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:34.387751   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:34.408048   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:34.448373   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:34.528721   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:34.689399   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:35.010427   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:35.651302   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:36.932360   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:38.611108   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
E1107 23:43:39.492590   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:44.612821   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:47.887083   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:47.892426   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:47.902707   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:47.923002   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:47.963299   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:48.043411   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:48.204438   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:48.524957   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:49.165955   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:50.447085   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:53.007312   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:43:54.853923   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:43:58.128249   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:44:08.368775   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:44:15.303665   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:44:15.334912   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:44:27.989558   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:27.994867   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:28.005033   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:28.025353   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:28.065697   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:28.146014   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:28.306520   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:28.627399   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:28.849851   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:44:29.267652   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:30.547912   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:33.108200   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:38.228679   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:39.960983   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:44:47.673768   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:47.679058   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:47.689343   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:47.709667   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:47.749973   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:47.830285   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:47.990440   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:48.311078   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:48.469884   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:44:48.951640   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:50.231827   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:52.792358   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:44:56.295060   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:44:57.913539   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:45:08.154011   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:45:08.951003   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:45:09.810196   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:45:28.634809   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:45:31.753553   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:31.758832   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:31.769140   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:31.789441   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:31.829765   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:31.910124   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:32.070568   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:32.391146   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:33.032047   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:34.312271   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:36.872977   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:41.993859   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:45.604746   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:45.610014   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:45.620317   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:45.640605   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:45.680908   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:45.761203   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:45.921872   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:46.242438   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:46.883375   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:48.163904   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:49.911488   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:45:50.724406   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:45:52.234932   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:45:53.400982   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
E1107 23:45:55.844967   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:46:06.086121   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:46:09.594999   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
E1107 23:46:12.715741   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:46:18.215813   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
E1107 23:46:26.566431   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:46:31.731342   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
E1107 23:46:53.676009   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
E1107 23:46:56.118707   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:47:07.527330   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
E1107 23:47:11.832315   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
E1107 23:47:23.801962   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/auto-422611/client.crt: no such file or directory
E1107 23:47:31.515708   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/enable-default-cni-422611/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-749499 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m42.153076039s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-749499 -n default-k8s-diff-port-749499
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (342.46s)
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ctgn6" [cda84a9d-3c29-48ba-bd83-ad3a69407e85] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1107 23:48:15.596936   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/flannel-422611/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ctgn6" [cda84a9d-3c29-48ba-bd83-ad3a69407e85] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.017096487s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ctgn6" [cda84a9d-3c29-48ba-bd83-ad3a69407e85] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01047266s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-217244 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.46s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-217244 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.46s)
TestStartStop/group/no-preload/serial/Pause (3.24s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-217244 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-217244 -n no-preload-217244
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-217244 -n no-preload-217244: exit status 2 (339.217145ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-217244 -n no-preload-217244
E1107 23:48:29.448314   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/bridge-422611/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-217244 -n no-preload-217244: exit status 2 (354.034ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-217244 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-217244 -n no-preload-217244
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-217244 -n no-preload-217244
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.24s)
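The `status --format={{.APIServer}}` and `status --format={{.Kubelet}}` invocations above select a single field from minikube's status via Go `text/template` syntax; a paused or stopped component makes `status` exit non-zero (exit status 2 here), which the test explicitly tolerates ("may be ok"). A minimal sketch of that templating mechanism — the `Status` struct below is illustrative, not minikube's actual type:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status mirrors the shape of the fields selected by --format={{.APIServer}};
// the struct itself is an assumption for illustration, not minikube's real type.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

// render applies a --format style Go template to a status value.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	out, _ := render("{{.APIServer}}", st)
	fmt.Println(out) // → Paused
}
```

This is why the same command prints `Paused` after `pause` and `Stopped` for the kubelet: the template just extracts whichever field the caller names.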

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d2rtw" [86c860a5-2111-487c-9b3e-6dfd163f9e90] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1107 23:48:34.372074   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/kindnet-422611/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d2rtw" [86c860a5-2111-487c-9b3e-6dfd163f9e90] Running
E1107 23:48:47.887508   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.021764759s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.02s)

TestStartStop/group/newest-cni/serial/FirstStart (40.31s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-260817 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1107 23:48:38.610506   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/ingress-addon-legacy-124713/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-260817 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (40.306848989s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.31s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d2rtw" [86c860a5-2111-487c-9b3e-6dfd163f9e90] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010540302s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-500876 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-500876 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)
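The VerifyKubernetesImages steps run `crictl images -o json` over SSH and log any image whose repository is outside minikube's expected set — hence the "Found non-minikube image" lines for `kindest/kindnetd` and `gcr.io/k8s-minikube/busybox`. A sketch of that check, assuming the `images`/`repoTags` field names of crictl's JSON output and a hypothetical allow-list of prefixes:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// crictlImages models the relevant part of `crictl images -o json` output;
// the field names here are assumptions based on that format.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// nonMinikubeImages returns repo tags not matching any allowed prefix —
// a simplified stand-in for the test's expected-image set.
func nonMinikubeImages(raw []byte, allowed []string) ([]string, error) {
	var parsed crictlImages
	if err := json.Unmarshal(raw, &parsed); err != nil {
		return nil, err
	}
	var extra []string
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			ok := false
			for _, prefix := range allowed {
				if strings.HasPrefix(tag, prefix) {
					ok = true
					break
				}
			}
			if !ok {
				extra = append(extra, tag)
			}
		}
	}
	return extra, nil
}

func main() {
	raw := []byte(`{"images":[
	  {"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"]},
	  {"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"]}]}`)
	extra, _ := nonMinikubeImages(raw, []string{"registry.k8s.io/"})
	for _, tag := range extra {
		fmt.Println("Found non-minikube image:", tag)
	}
}
```

The test treats such extra images as informational, not failures, which is why these sections still PASS.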

TestStartStop/group/embed-certs/serial/Pause (3.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-500876 --alsologtostderr -v=1
E1107 23:48:56.447000   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/functional-773400/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-500876 -n embed-certs-500876
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-500876 -n embed-certs-500876: exit status 2 (351.892519ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-500876 -n embed-certs-500876
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-500876 -n embed-certs-500876: exit status 2 (339.739044ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-500876 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-500876 -n embed-certs-500876
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-500876 -n embed-certs-500876
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-smsvm" [990d0c33-f87c-4612-9ee5-3a0121c61f79] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-smsvm" [990d0c33-f87c-4612-9ee5-3a0121c61f79] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.016225837s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-260817 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1107 23:49:15.303184   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/addons-890770/client.crt: no such file or directory
E1107 23:49:15.572160   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/calico-422611/client.crt: no such file or directory
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (1.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-260817 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-260817 --alsologtostderr -v=3: (1.905070195s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.91s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-smsvm" [990d0c33-f87c-4612-9ee5-3a0121c61f79] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009871934s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-749499 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-260817 -n newest-cni-260817
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-260817 -n newest-cni-260817: exit status 7 (89.681669ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-260817 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (25.55s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-260817 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-260817 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (25.243947571s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-260817 -n newest-cni-260817
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.55s)
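The start commands above pass `--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16`, where the value takes the shape component.key=value (here: the `kubeadm` component, key `pod-network-cidr`). A hypothetical parser for that shape — a sketch of the flag's structure, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// extraConfig holds one --extra-config entry of the form component.key=value.
type extraConfig struct {
	Component, Key, Value string
}

// parseExtraConfig splits e.g. "kubeadm.pod-network-cidr=10.42.0.0/16" into
// its component, key, and value parts.
func parseExtraConfig(s string) (extraConfig, error) {
	eq := strings.IndexByte(s, '=')
	if eq < 0 {
		return extraConfig{}, fmt.Errorf("missing '=' in %q", s)
	}
	dot := strings.IndexByte(s[:eq], '.')
	if dot < 0 {
		return extraConfig{}, fmt.Errorf("missing component prefix in %q", s)
	}
	return extraConfig{
		Component: s[:dot],
		Key:       s[dot+1 : eq],
		Value:     s[eq+1:],
	}, nil
}

func main() {
	cfg, err := parseExtraConfig("kubeadm.pod-network-cidr=10.42.0.0/16")
	fmt.Println(cfg, err)
}
```

Splitting on the first `.` before the `=` keeps keys with embedded dashes (like `pod-network-cidr`) and values with slashes (like a CIDR) intact.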

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-749499 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-749499 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-749499 -n default-k8s-diff-port-749499
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-749499 -n default-k8s-diff-port-749499: exit status 2 (297.324763ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-749499 -n default-k8s-diff-port-749499
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-749499 -n default-k8s-diff-port-749499: exit status 2 (309.239122ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-749499 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-749499 -n default-k8s-diff-port-749499
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-749499 -n default-k8s-diff-port-749499
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.80s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-260817 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (2.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-260817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-260817 -n newest-cni-260817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-260817 -n newest-cni-260817: exit status 2 (296.448127ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-260817 -n newest-cni-260817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-260817 -n newest-cni-260817: exit status 2 (300.598479ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-260817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-260817 -n newest-cni-260817
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-260817 -n newest-cni-260817
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-266wg" [16fc1b74-63ce-433e-85cb-c4c08237bc29] Running
E1107 23:49:55.672672   16211 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17585-9432/.minikube/profiles/custom-flannel-422611/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014959183s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-266wg" [16fc1b74-63ce-433e-85cb-c4c08237bc29] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009587754s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-382775 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-382775 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/Pause (2.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-382775 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-382775 -n old-k8s-version-382775
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-382775 -n old-k8s-version-382775: exit status 2 (302.959726ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-382775 -n old-k8s-version-382775
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-382775 -n old-k8s-version-382775: exit status 2 (310.715434ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-382775 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-382775 -n old-k8s-version-382775
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-382775 -n old-k8s-version-382775
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.72s)

Test skip (24/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
TestNetworkPlugins/group/kubenet (5s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-422611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-422611

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-422611

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-422611

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-422611

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-422611

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-422611

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-422611

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-422611

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-422611

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-422611

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /etc/hosts:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /etc/resolv.conf:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-422611

>>> host: crictl pods:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: crictl containers:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> k8s: describe netcat deployment:
error: context "kubenet-422611" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-422611" does not exist

>>> k8s: netcat logs:
error: context "kubenet-422611" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-422611" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-422611" does not exist

>>> k8s: coredns logs:
error: context "kubenet-422611" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-422611" does not exist

>>> k8s: api server logs:
error: context "kubenet-422611" does not exist

>>> host: /etc/cni:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: ip a s:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: ip r s:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: iptables-save:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: iptables table nat:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-422611" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-422611" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-422611" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: kubelet daemon config:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> k8s: kubelet logs:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-422611

>>> host: docker daemon status:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: docker daemon config:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: docker system info:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: cri-docker daemon status:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: cri-docker daemon config:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: cri-dockerd version:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: containerd daemon status:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: containerd daemon config:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: containerd config dump:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: crio daemon status:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: crio daemon config:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: /etc/crio:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

>>> host: crio config:
* Profile "kubenet-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-422611"

----------------------- debugLogs end: kubenet-422611 [took: 4.783868077s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-422611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-422611
--- SKIP: TestNetworkPlugins/group/kubenet (5.00s)
TestNetworkPlugins/group/cilium (3.68s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-422611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-422611

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-422611" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-422611" does not exist

>>> k8s: netcat logs:
error: context "cilium-422611" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-422611" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-422611" does not exist

>>> k8s: coredns logs:
error: context "cilium-422611" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-422611" does not exist

>>> k8s: api server logs:
error: context "cilium-422611" does not exist

>>> host: /etc/cni:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: ip a s:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: ip r s:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: iptables-save:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: iptables table nat:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-422611

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-422611

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-422611" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-422611" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-422611

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-422611

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-422611" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-422611" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-422611" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-422611" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-422611" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: kubelet daemon config:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> k8s: kubelet logs:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-422611

>>> host: docker daemon status:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: docker daemon config:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: docker system info:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: cri-docker daemon status:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: cri-docker daemon config:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: cri-dockerd version:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: containerd daemon status:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: containerd daemon config:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: containerd config dump:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: crio daemon status:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: crio daemon config:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: /etc/crio:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

>>> host: crio config:
* Profile "cilium-422611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-422611"

----------------------- debugLogs end: cilium-422611 [took: 3.492800864s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-422611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-422611
--- SKIP: TestNetworkPlugins/group/cilium (3.68s)
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-423912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-423912
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
