Test Report: Docker_Linux_crio 16569

852d9197a19e9ebea28af4d23e9565040e130819:2023-05-31:29511

Failed tests (7/302)

|-------|-----------------------------------------------------|--------------|
| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 25    | TestAddons/parallel/Ingress                         | 155.38       |
| 112   | TestFunctional/parallel/ImageCommands/ImageBuild    | 8.43         |
| 152   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 183.60       |
| 202   | TestMultiNode/serial/PingHostFrom2Pods              | 3.01         |
| 217   | TestPreload                                         | 149.18       |
| 223   | TestRunningBinaryUpgrade                            | 75.73        |
| 231   | TestStoppedBinaryUpgrade/Upgrade                    | 99.97        |
|-------|-----------------------------------------------------|--------------|
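
Each failure below can be reproduced locally by filtering the integration suite down to a single test. A minimal sketch, assuming a minikube source checkout and a prebuilt out/minikube-linux-amd64 (the exact flags this CI job passes are not recorded in this report):

	# Re-run only the first failed test; -run takes a regular expression.
	go test ./test/integration -v -timeout 90m -run 'TestAddons/parallel/Ingress'
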
TestAddons/parallel/Ingress (155.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-133126 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context addons-133126 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (2.722111543s)
addons_test.go:208: (dbg) Run:  kubectl --context addons-133126 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-133126 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a0491903-bdbe-40ef-a10a-4af0a075c0ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a0491903-bdbe-40ef-a10a-4af0a075c0ce] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.018192718s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-133126 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.978951396s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-133126 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-133126 addons disable ingress --alsologtostderr -v=1: (7.519165877s)
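
The failing step above is the in-VM curl: "ssh: Process exited with status 28" is curl's exit code 28 (CURLE_OPERATION_TIMEDOUT) propagated through minikube ssh, i.e. the ingress controller never answered on 127.0.0.1:80 before the client gave up. A manual check of the same path, as a sketch (the profile name addons-133126 comes from this run; --max-time is an added illustrative bound, not part of the test):

	# Prints only the HTTP status code; exit code 28 again indicates a timeout.
	out/minikube-linux-amd64 -p addons-133126 ssh \
	  "curl -s -o /dev/null -w '%{http_code}' --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
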
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-133126
helpers_test.go:235: (dbg) docker inspect addons-133126:

-- stdout --
	[
	    {
	        "Id": "bef777d9462baa5aecaec91590420f1aaa844b165b3a4b61005f8108c4b76d5d",
	        "Created": "2023-05-31T18:44:11.870999845Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 15813,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T18:44:12.181308509Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f246fffc476e503eec088cb85bddb7b217288054dd7e1375d4f95eca27f4bce3",
	        "ResolvConfPath": "/var/lib/docker/containers/bef777d9462baa5aecaec91590420f1aaa844b165b3a4b61005f8108c4b76d5d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bef777d9462baa5aecaec91590420f1aaa844b165b3a4b61005f8108c4b76d5d/hostname",
	        "HostsPath": "/var/lib/docker/containers/bef777d9462baa5aecaec91590420f1aaa844b165b3a4b61005f8108c4b76d5d/hosts",
	        "LogPath": "/var/lib/docker/containers/bef777d9462baa5aecaec91590420f1aaa844b165b3a4b61005f8108c4b76d5d/bef777d9462baa5aecaec91590420f1aaa844b165b3a4b61005f8108c4b76d5d-json.log",
	        "Name": "/addons-133126",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-133126:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-133126",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b7ea2e0048e85f0dc0433b2f5bf2b8cd451e8d90b5bcb280d565b3017854485a-init/diff:/var/lib/docker/overlay2/ff5bbba96769eca5d0c1a4ffdb04787b9f84aae4dcd4bc9929a365a3d058b20f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7ea2e0048e85f0dc0433b2f5bf2b8cd451e8d90b5bcb280d565b3017854485a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7ea2e0048e85f0dc0433b2f5bf2b8cd451e8d90b5bcb280d565b3017854485a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7ea2e0048e85f0dc0433b2f5bf2b8cd451e8d90b5bcb280d565b3017854485a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-133126",
	                "Source": "/var/lib/docker/volumes/addons-133126/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-133126",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-133126",
	                "name.minikube.sigs.k8s.io": "addons-133126",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c6c1c6f20f877ea1d0d6dacf6a593f7bce5132808b8f9b7d1e1f2d8fec621477",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c6c1c6f20f87",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-133126": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bef777d9462b",
	                        "addons-133126"
	                    ],
	                    "NetworkID": "f7fa64cb42c3b81909864269dbfad9089f6d5520e54d7a2c453380baa55076a9",
	                    "EndpointID": "e929f318f2fcd30ff561855f89a6f3261dc3e12fae93ea7c9a743db51abcbfa6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
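
Individual fields can be pulled out of this JSON with docker inspect's Go-template flag instead of reading the full dump; the minikube logs below use the same technique. A sketch built from the template strings that appear verbatim later in this report:

	# Host port mapped to the guest's SSH port, and the container IP on the cluster network.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-133126
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-133126
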
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-133126 -n addons-133126
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-133126 logs -n 25: (1.169347505s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-937565   | jenkins | v1.30.1 | 31 May 23 18:43 UTC |                     |
	|         | -p download-only-937565        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-937565   | jenkins | v1.30.1 | 31 May 23 18:43 UTC |                     |
	|         | -p download-only-937565        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 31 May 23 18:43 UTC | 31 May 23 18:43 UTC |
	| delete  | -p download-only-937565        | download-only-937565   | jenkins | v1.30.1 | 31 May 23 18:43 UTC | 31 May 23 18:43 UTC |
	| delete  | -p download-only-937565        | download-only-937565   | jenkins | v1.30.1 | 31 May 23 18:43 UTC | 31 May 23 18:43 UTC |
	| start   | --download-only -p             | download-docker-546935 | jenkins | v1.30.1 | 31 May 23 18:43 UTC |                     |
	|         | download-docker-546935         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-546935      | download-docker-546935 | jenkins | v1.30.1 | 31 May 23 18:43 UTC | 31 May 23 18:43 UTC |
	| start   | --download-only -p             | binary-mirror-791091   | jenkins | v1.30.1 | 31 May 23 18:43 UTC |                     |
	|         | binary-mirror-791091           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41685         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-791091        | binary-mirror-791091   | jenkins | v1.30.1 | 31 May 23 18:43 UTC | 31 May 23 18:43 UTC |
	| start   | -p addons-133126               | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:43 UTC | 31 May 23 18:45 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	|         | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:45 UTC | 31 May 23 18:45 UTC |
	|         | -p addons-133126               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-133126 addons disable   | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:45 UTC | 31 May 23 18:45 UTC |
	|         | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| ip      | addons-133126 ip               | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:46 UTC | 31 May 23 18:46 UTC |
	| addons  | addons-133126 addons disable   | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:46 UTC | 31 May 23 18:46 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-133126 addons           | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:46 UTC | 31 May 23 18:46 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:46 UTC | 31 May 23 18:46 UTC |
	|         | addons-133126                  |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:46 UTC | 31 May 23 18:46 UTC |
	|         | addons-133126                  |                        |         |         |                     |                     |
	| ssh     | addons-133126 ssh curl -s      | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:46 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-133126 addons           | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:46 UTC | 31 May 23 18:46 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-133126 addons           | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:46 UTC | 31 May 23 18:46 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-133126 ip               | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:48 UTC | 31 May 23 18:48 UTC |
	| addons  | addons-133126 addons disable   | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:48 UTC | 31 May 23 18:48 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-133126 addons disable   | addons-133126          | jenkins | v1.30.1 | 31 May 23 18:48 UTC | 31 May 23 18:48 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 18:43:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:43:48.622014   15123 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:43:48.622137   15123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:43:48.622147   15123 out.go:309] Setting ErrFile to fd 2...
	I0531 18:43:48.622151   15123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:43:48.622266   15123 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 18:43:48.622887   15123 out.go:303] Setting JSON to false
	I0531 18:43:48.623770   15123 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1578,"bootTime":1685557051,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:43:48.623833   15123 start.go:137] virtualization: kvm guest
	I0531 18:43:48.626741   15123 out.go:177] * [addons-133126] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:43:48.628579   15123 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 18:43:48.628655   15123 notify.go:220] Checking for updates...
	I0531 18:43:48.630194   15123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:43:48.632826   15123 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 18:43:48.634542   15123 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 18:43:48.636407   15123 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:43:48.638278   15123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:43:48.640070   15123 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:43:48.660375   15123 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:43:48.660465   15123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:43:48.710800   15123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:39 SystemTime:2023-05-31 18:43:48.703000061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:43:48.710885   15123 docker.go:294] overlay module found
	I0531 18:43:48.713581   15123 out.go:177] * Using the docker driver based on user configuration
	I0531 18:43:48.715806   15123 start.go:297] selected driver: docker
	I0531 18:43:48.715819   15123 start.go:875] validating driver "docker" against <nil>
	I0531 18:43:48.715829   15123 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:43:48.716594   15123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:43:48.762395   15123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:39 SystemTime:2023-05-31 18:43:48.754519045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:43:48.762535   15123 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 18:43:48.762707   15123 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:43:48.765104   15123 out.go:177] * Using Docker driver with root privileges
	I0531 18:43:48.767138   15123 cni.go:84] Creating CNI manager for ""
	I0531 18:43:48.767153   15123 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:43:48.767160   15123 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 18:43:48.767173   15123 start_flags.go:319] config:
	{Name:addons-133126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-133126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:43:48.769155   15123 out.go:177] * Starting control plane node addons-133126 in cluster addons-133126
	I0531 18:43:48.770639   15123 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 18:43:48.772776   15123 out.go:177] * Pulling base image ...
	I0531 18:43:48.774571   15123 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:43:48.774601   15123 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4
	I0531 18:43:48.774607   15123 cache.go:57] Caching tarball of preloaded images
	I0531 18:43:48.774668   15123 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 18:43:48.774685   15123 preload.go:174] Found /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:43:48.774693   15123 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 18:43:48.775026   15123 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/config.json ...
	I0531 18:43:48.775047   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/config.json: {Name:mk0353a75e48007bda8c46daddb4f5cc6ee8681f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:43:48.789954   15123 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0531 18:43:48.790069   15123 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0531 18:43:48.790087   15123 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory, skipping pull
	I0531 18:43:48.790096   15123 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in cache, skipping pull
	I0531 18:43:48.790109   15123 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	I0531 18:43:48.790119   15123 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from local cache
	I0531 18:43:59.344541   15123 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 from cached tarball
	I0531 18:43:59.344579   15123 cache.go:195] Successfully downloaded all kic artifacts
	I0531 18:43:59.344612   15123 start.go:364] acquiring machines lock for addons-133126: {Name:mk7cfc186eff2a67142639b4e226c7cc5c9c66c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:43:59.344702   15123 start.go:368] acquired machines lock for "addons-133126" in 71.971µs
	I0531 18:43:59.344725   15123 start.go:93] Provisioning new machine with config: &{Name:addons-133126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-133126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:43:59.344811   15123 start.go:125] createHost starting for "" (driver="docker")
	I0531 18:43:59.346805   15123 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0531 18:43:59.346991   15123 start.go:159] libmachine.API.Create for "addons-133126" (driver="docker")
	I0531 18:43:59.347024   15123 client.go:168] LocalClient.Create starting
	I0531 18:43:59.347135   15123 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem
	I0531 18:43:59.521934   15123 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem
	I0531 18:43:59.731067   15123 cli_runner.go:164] Run: docker network inspect addons-133126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 18:43:59.746274   15123 cli_runner.go:211] docker network inspect addons-133126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 18:43:59.746352   15123 network_create.go:281] running [docker network inspect addons-133126] to gather additional debugging logs...
	I0531 18:43:59.746372   15123 cli_runner.go:164] Run: docker network inspect addons-133126
	W0531 18:43:59.760608   15123 cli_runner.go:211] docker network inspect addons-133126 returned with exit code 1
	I0531 18:43:59.760635   15123 network_create.go:284] error running [docker network inspect addons-133126]: docker network inspect addons-133126: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-133126 not found
	I0531 18:43:59.760656   15123 network_create.go:286] output of [docker network inspect addons-133126]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-133126 not found
	
	** /stderr **
	I0531 18:43:59.760691   15123 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:43:59.774939   15123 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013ba7a0}
	I0531 18:43:59.774979   15123 network_create.go:123] attempt to create docker network addons-133126 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 18:43:59.775025   15123 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-133126 addons-133126
	I0531 18:43:59.825234   15123 network_create.go:107] docker network addons-133126 192.168.49.0/24 created
	I0531 18:43:59.825267   15123 kic.go:117] calculated static IP "192.168.49.2" for the "addons-133126" container
	I0531 18:43:59.825350   15123 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 18:43:59.840862   15123 cli_runner.go:164] Run: docker volume create addons-133126 --label name.minikube.sigs.k8s.io=addons-133126 --label created_by.minikube.sigs.k8s.io=true
	I0531 18:43:59.857059   15123 oci.go:103] Successfully created a docker volume addons-133126
	I0531 18:43:59.857123   15123 cli_runner.go:164] Run: docker run --rm --name addons-133126-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133126 --entrypoint /usr/bin/test -v addons-133126:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 18:44:06.920207   15123 cli_runner.go:217] Completed: docker run --rm --name addons-133126-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133126 --entrypoint /usr/bin/test -v addons-133126:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib: (7.06304178s)
	I0531 18:44:06.920237   15123 oci.go:107] Successfully prepared a docker volume addons-133126
	I0531 18:44:06.920268   15123 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:44:06.920290   15123 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 18:44:06.920377   15123 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-133126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 18:44:11.807682   15123 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-133126:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.887266657s)
	I0531 18:44:11.807710   15123 kic.go:199] duration metric: took 4.887418 seconds to extract preloaded images to volume
	W0531 18:44:11.807850   15123 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 18:44:11.807962   15123 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 18:44:11.856861   15123 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-133126 --name addons-133126 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133126 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-133126 --network addons-133126 --ip 192.168.49.2 --volume addons-133126:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 18:44:12.190920   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Running}}
	I0531 18:44:12.207409   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:12.225591   15123 cli_runner.go:164] Run: docker exec addons-133126 stat /var/lib/dpkg/alternatives/iptables
	I0531 18:44:12.297775   15123 oci.go:144] the created container "addons-133126" has a running status.
	I0531 18:44:12.297811   15123 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa...
	I0531 18:44:12.612682   15123 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 18:44:12.632387   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:12.648918   15123 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 18:44:12.648943   15123 kic_runner.go:114] Args: [docker exec --privileged addons-133126 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 18:44:12.715984   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:12.734689   15123 machine.go:88] provisioning docker machine ...
	I0531 18:44:12.734720   15123 ubuntu.go:169] provisioning hostname "addons-133126"
	I0531 18:44:12.734780   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:12.752879   15123 main.go:141] libmachine: Using SSH client type: native
	I0531 18:44:12.753497   15123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0531 18:44:12.753517   15123 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-133126 && echo "addons-133126" | sudo tee /etc/hostname
	I0531 18:44:12.882577   15123 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-133126
	
	I0531 18:44:12.882660   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:12.899665   15123 main.go:141] libmachine: Using SSH client type: native
	I0531 18:44:12.900084   15123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0531 18:44:12.900104   15123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-133126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-133126/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-133126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:44:13.012070   15123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:44:13.012102   15123 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-7270/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-7270/.minikube}
	I0531 18:44:13.012125   15123 ubuntu.go:177] setting up certificates
	I0531 18:44:13.012135   15123 provision.go:83] configureAuth start
	I0531 18:44:13.012194   15123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133126
	I0531 18:44:13.028517   15123 provision.go:138] copyHostCerts
	I0531 18:44:13.028602   15123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem (1078 bytes)
	I0531 18:44:13.028734   15123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem (1123 bytes)
	I0531 18:44:13.028822   15123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem (1675 bytes)
	I0531 18:44:13.028882   15123 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem org=jenkins.addons-133126 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-133126]
	I0531 18:44:13.188393   15123 provision.go:172] copyRemoteCerts
	I0531 18:44:13.188445   15123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:44:13.188478   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:13.204000   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:13.288949   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:44:13.309311   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0531 18:44:13.329899   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:44:13.350302   15123 provision.go:86] duration metric: configureAuth took 338.154141ms
	I0531 18:44:13.350329   15123 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:44:13.350502   15123 config.go:182] Loaded profile config "addons-133126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:44:13.350596   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:13.366606   15123 main.go:141] libmachine: Using SSH client type: native
	I0531 18:44:13.367030   15123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0531 18:44:13.367048   15123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:44:13.561987   15123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:44:13.562017   15123 machine.go:91] provisioned docker machine in 827.305939ms
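
Note: the SSH command above wrote CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarted CRI-O, so the service CIDR 10.96.0.0/12 is treated as an insecure registry range. A quick sanity check on the node, assuming the same file path:

	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio
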
	I0531 18:44:13.562027   15123 client.go:171] LocalClient.Create took 14.214996362s
	I0531 18:44:13.562049   15123 start.go:167] duration metric: libmachine.API.Create for "addons-133126" took 14.215055421s
	I0531 18:44:13.562058   15123 start.go:300] post-start starting for "addons-133126" (driver="docker")
	I0531 18:44:13.562066   15123 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:44:13.562132   15123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:44:13.562175   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:13.578972   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:13.668478   15123 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:44:13.671332   15123 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:44:13.671368   15123 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:44:13.671385   15123 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:44:13.671392   15123 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 18:44:13.671402   15123 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/addons for local assets ...
	I0531 18:44:13.671460   15123 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/files for local assets ...
	I0531 18:44:13.671493   15123 start.go:303] post-start completed in 109.428884ms
	I0531 18:44:13.671834   15123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133126
	I0531 18:44:13.688569   15123 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/config.json ...
	I0531 18:44:13.688807   15123 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:44:13.688843   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:13.704060   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:13.784737   15123 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:44:13.788696   15123 start.go:128] duration metric: createHost completed in 14.443870578s
	I0531 18:44:13.788720   15123 start.go:83] releasing machines lock for "addons-133126", held for 14.444007315s
	I0531 18:44:13.788857   15123 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133126
	I0531 18:44:13.804501   15123 ssh_runner.go:195] Run: cat /version.json
	I0531 18:44:13.804554   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:13.804592   15123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:44:13.804658   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:13.821853   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:13.821867   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:14.017494   15123 ssh_runner.go:195] Run: systemctl --version
	I0531 18:44:14.021450   15123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:44:14.156269   15123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 18:44:14.160078   15123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:44:14.178467   15123 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 18:44:14.178535   15123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:44:14.205157   15123 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
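
Note: renaming the default loopback/bridge/podman CNI configs to *.mk_disabled leaves /etc/cni/net.d clear for the kindnet config minikube installs later. To see what remains active after this step (a sketch, run on the node):

	ls -l /etc/cni/net.d/
	sudo find /etc/cni/net.d -name '*.mk_disabled'
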
	I0531 18:44:14.205178   15123 start.go:481] detecting cgroup driver to use...
	I0531 18:44:14.205210   15123 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 18:44:14.205253   15123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:44:14.218123   15123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:44:14.227488   15123 docker.go:193] disabling cri-docker service (if available) ...
	I0531 18:44:14.227541   15123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:44:14.238872   15123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:44:14.250785   15123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:44:14.326145   15123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:44:14.404011   15123 docker.go:209] disabling docker service ...
	I0531 18:44:14.404065   15123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:44:14.420001   15123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:44:14.429789   15123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:44:14.499813   15123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:44:14.568849   15123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:44:14.578365   15123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:44:14.591547   15123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:44:14.591593   15123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:44:14.599747   15123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:44:14.599816   15123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:44:14.607618   15123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:44:14.615562   15123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
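
Note: the sed edits above pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O's cgroup manager to cgroupfs (matching the cgroupfs driver detected on the host), and move conmon into the pod cgroup. A one-line verification of the resulting drop-in, assuming the same path:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
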
	I0531 18:44:14.623506   15123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:44:14.631085   15123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:44:14.637956   15123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:44:14.644963   15123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:44:14.715408   15123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:44:14.811145   15123 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:44:14.811227   15123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:44:14.814236   15123 start.go:549] Will wait 60s for crictl version
	I0531 18:44:14.814280   15123 ssh_runner.go:195] Run: which crictl
	I0531 18:44:14.817102   15123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:44:14.848550   15123 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 18:44:14.848641   15123 ssh_runner.go:195] Run: crio --version
	I0531 18:44:14.881667   15123 ssh_runner.go:195] Run: crio --version
	I0531 18:44:14.915480   15123 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0531 18:44:14.917574   15123 cli_runner.go:164] Run: docker network inspect addons-133126 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:44:14.933014   15123 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:44:14.936128   15123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
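
Note: the /etc/hosts update uses a grep -v / append / cp cycle so the entry is idempotent: any stale host.minikube.internal line is dropped before the fresh one is appended. The same pattern generalized, with hypothetical HOST and IP values (not from this run):

	HOST=host.example.internal; IP=10.0.0.1
	{ grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
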
	I0531 18:44:14.945298   15123 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:44:14.945345   15123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:44:14.990473   15123 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 18:44:14.990493   15123 crio.go:415] Images already preloaded, skipping extraction
	I0531 18:44:14.990537   15123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:44:15.020710   15123 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 18:44:15.020731   15123 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:44:15.020781   15123 ssh_runner.go:195] Run: crio config
	I0531 18:44:15.059864   15123 cni.go:84] Creating CNI manager for ""
	I0531 18:44:15.059886   15123 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:44:15.059898   15123 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:44:15.059915   15123 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-133126 NodeName:addons-133126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 18:44:15.060052   15123 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-133126"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
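
Note: this generated file stacks four kubeadm documents: an InitConfiguration (node registration against the CRI-O socket), a ClusterConfiguration (API server SANs and admission plugins, etcd under /var/lib/minikube/etcd), a KubeletConfiguration (cgroupfs driver, disk eviction disabled), and a KubeProxyConfiguration. Such a file can be checked statically before init, a sketch assuming the config validate subcommand available in kubeadm v1.27 on the node:

	/var/lib/minikube/binaries/v1.27.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
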
	
	I0531 18:44:15.060155   15123 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-133126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-133126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:44:15.060209   15123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0531 18:44:15.068532   15123 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:44:15.068584   15123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:44:15.076048   15123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0531 18:44:15.090815   15123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:44:15.105637   15123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
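
Note: the three scp-from-memory steps above install the kubelet systemd drop-in (10-kubeadm.conf), the kubelet.service unit, and the kubeadm.yaml shown earlier. To inspect the merged unit that systemd will actually run (on the node):

	systemctl cat kubelet
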
	I0531 18:44:15.120314   15123 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:44:15.123294   15123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:44:15.132437   15123 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126 for IP: 192.168.49.2
	I0531 18:44:15.132463   15123 certs.go:190] acquiring lock for shared ca certs: {Name:mkbc42e9eaddef0752bd9f3cb948d1ed478bdf0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.132598   15123 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key
	I0531 18:44:15.271048   15123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt ...
	I0531 18:44:15.271077   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt: {Name:mkdb6c26fb9fde6b1f77e48f74f5d6b122767421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.271230   15123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key ...
	I0531 18:44:15.271240   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key: {Name:mke7f3632357127f82fffe854862a7a64646a633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.271306   15123 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key
	I0531 18:44:15.386883   15123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt ...
	I0531 18:44:15.386914   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt: {Name:mkcfd727709907c6773ecdcdec12e4bc4a99a66f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.387096   15123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key ...
	I0531 18:44:15.387110   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key: {Name:mk35ffa825c832d57721d4bb58ff04f4d695d16f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.387231   15123 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.key
	I0531 18:44:15.387246   15123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt with IP's: []
	I0531 18:44:15.478046   15123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt ...
	I0531 18:44:15.478074   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: {Name:mke8f1d9e4d8dd64ac1bc367c973055a5f009619 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.478257   15123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.key ...
	I0531 18:44:15.478271   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.key: {Name:mkfa076c9e2f08907cfc495dda1cfe690a3a18ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.478359   15123 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.key.dd3b5fb2
	I0531 18:44:15.478377   15123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 18:44:15.571509   15123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.crt.dd3b5fb2 ...
	I0531 18:44:15.571536   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.crt.dd3b5fb2: {Name:mkefd42cc36939fd1a992c8365faee4966e2dea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.571710   15123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.key.dd3b5fb2 ...
	I0531 18:44:15.571723   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.key.dd3b5fb2: {Name:mke17ae84a4b15042895d62e51e71a1f628d784d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.571814   15123 certs.go:337] copying /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.crt
	I0531 18:44:15.571879   15123 certs.go:341] copying /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.key
	I0531 18:44:15.571919   15123 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/proxy-client.key
	I0531 18:44:15.571934   15123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/proxy-client.crt with IP's: []
	I0531 18:44:15.794078   15123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/proxy-client.crt ...
	I0531 18:44:15.794112   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/proxy-client.crt: {Name:mk52fc6ac16e02e2292a85a89a09e34a8dace07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.794308   15123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/proxy-client.key ...
	I0531 18:44:15.794322   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/proxy-client.key: {Name:mke4b4eb5b26a1e6a3bf64d386ebd7854e4cea83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:15.794516   15123 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem (1679 bytes)
	I0531 18:44:15.794552   15123 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:44:15.794576   15123 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:44:15.794600   15123 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem (1675 bytes)
	I0531 18:44:15.795112   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:44:15.815610   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:44:15.837190   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:44:15.858468   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 18:44:15.878849   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:44:15.899783   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:44:15.920266   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:44:15.941006   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:44:15.961896   15123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:44:15.981536   15123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:44:15.996097   15123 ssh_runner.go:195] Run: openssl version
	I0531 18:44:16.000613   15123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:44:16.008122   15123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:44:16.010923   15123 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:44:16.010962   15123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:44:16.016852   15123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
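
Note: the b5213941.0 symlink follows OpenSSL's hashed-directory convention: tools that trust /etc/ssl/certs look certificates up by subject-name hash, so the link name must match the hash of minikubeCA.pem. The x509 -hash command above prints exactly that value; to reproduce it on the node:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
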
	I0531 18:44:16.024992   15123 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 18:44:16.027790   15123 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 18:44:16.027834   15123 kubeadm.go:404] StartCluster: {Name:addons-133126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-133126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:44:16.027956   15123 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:44:16.028001   15123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:44:16.059304   15123 cri.go:88] found id: ""
	I0531 18:44:16.059368   15123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:44:16.067140   15123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:44:16.074489   15123 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:44:16.074558   15123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:44:16.081842   15123 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:44:16.081889   15123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:44:16.157532   15123 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0531 18:44:16.214862   15123 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 18:44:16.215126   15123 kubeadm.go:322] W0531 18:44:16.214606    1160 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 18:44:18.414795   15123 kubeadm.go:322] W0531 18:44:18.414237    1160 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 18:44:24.878724   15123 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0531 18:44:24.878809   15123 kubeadm.go:322] [preflight] Running pre-flight checks
	I0531 18:44:24.878924   15123 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0531 18:44:24.878993   15123 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1035-gcp
	I0531 18:44:24.879025   15123 kubeadm.go:322] OS: Linux
	I0531 18:44:24.879074   15123 kubeadm.go:322] CGROUPS_CPU: enabled
	I0531 18:44:24.879122   15123 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0531 18:44:24.879169   15123 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0531 18:44:24.879232   15123 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0531 18:44:24.879289   15123 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0531 18:44:24.879365   15123 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0531 18:44:24.879427   15123 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0531 18:44:24.879492   15123 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0531 18:44:24.879555   15123 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0531 18:44:24.879659   15123 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 18:44:24.879795   15123 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 18:44:24.879878   15123 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 18:44:24.879930   15123 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 18:44:24.882127   15123 out.go:204]   - Generating certificates and keys ...
	I0531 18:44:24.882232   15123 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0531 18:44:24.882324   15123 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0531 18:44:24.882420   15123 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 18:44:24.882506   15123 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0531 18:44:24.882595   15123 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0531 18:44:24.882668   15123 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0531 18:44:24.882740   15123 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0531 18:44:24.882861   15123 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-133126 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:44:24.882922   15123 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0531 18:44:24.883086   15123 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-133126 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:44:24.883179   15123 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 18:44:24.883257   15123 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 18:44:24.883326   15123 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0531 18:44:24.883390   15123 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 18:44:24.883454   15123 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 18:44:24.883540   15123 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 18:44:24.883625   15123 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 18:44:24.883700   15123 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 18:44:24.883839   15123 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 18:44:24.883960   15123 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 18:44:24.884008   15123 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0531 18:44:24.884090   15123 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 18:44:24.886245   15123 out.go:204]   - Booting up control plane ...
	I0531 18:44:24.886348   15123 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 18:44:24.886455   15123 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 18:44:24.886528   15123 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 18:44:24.886595   15123 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 18:44:24.886717   15123 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 18:44:24.886782   15123 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002458 seconds
	I0531 18:44:24.886907   15123 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 18:44:24.887061   15123 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 18:44:24.887135   15123 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 18:44:24.887373   15123 kubeadm.go:322] [mark-control-plane] Marking the node addons-133126 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 18:44:24.887459   15123 kubeadm.go:322] [bootstrap-token] Using token: u2oo31.01ioylo23dmnbj93
	I0531 18:44:24.889429   15123 out.go:204]   - Configuring RBAC rules ...
	I0531 18:44:24.889575   15123 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 18:44:24.889686   15123 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 18:44:24.889874   15123 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 18:44:24.890052   15123 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 18:44:24.890229   15123 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 18:44:24.890375   15123 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 18:44:24.890540   15123 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 18:44:24.890605   15123 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0531 18:44:24.890675   15123 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0531 18:44:24.890704   15123 kubeadm.go:322] 
	I0531 18:44:24.890810   15123 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0531 18:44:24.890825   15123 kubeadm.go:322] 
	I0531 18:44:24.890934   15123 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0531 18:44:24.890944   15123 kubeadm.go:322] 
	I0531 18:44:24.890980   15123 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0531 18:44:24.891081   15123 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 18:44:24.891162   15123 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 18:44:24.891175   15123 kubeadm.go:322] 
	I0531 18:44:24.891243   15123 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0531 18:44:24.891252   15123 kubeadm.go:322] 
	I0531 18:44:24.891312   15123 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 18:44:24.891321   15123 kubeadm.go:322] 
	I0531 18:44:24.891383   15123 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0531 18:44:24.891485   15123 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 18:44:24.891586   15123 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 18:44:24.891599   15123 kubeadm.go:322] 
	I0531 18:44:24.891698   15123 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 18:44:24.891835   15123 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0531 18:44:24.891854   15123 kubeadm.go:322] 
	I0531 18:44:24.891961   15123 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token u2oo31.01ioylo23dmnbj93 \
	I0531 18:44:24.892084   15123 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 \
	I0531 18:44:24.892122   15123 kubeadm.go:322] 	--control-plane 
	I0531 18:44:24.892134   15123 kubeadm.go:322] 
	I0531 18:44:24.892246   15123 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0531 18:44:24.892255   15123 kubeadm.go:322] 
	I0531 18:44:24.892379   15123 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token u2oo31.01ioylo23dmnbj93 \
	I0531 18:44:24.892527   15123 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 
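
Note: the --discovery-token-ca-cert-hash pins the cluster CA for joining nodes; kubeadm documents it as the SHA-256 of the CA's DER-encoded public key. The value above can therefore be recomputed from the CA cert this run placed at /var/lib/minikube/certs/ca.crt (a sketch; run on the control-plane node):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
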
	I0531 18:44:24.892548   15123 cni.go:84] Creating CNI manager for ""
	I0531 18:44:24.892555   15123 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:44:24.894533   15123 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:44:24.896530   15123 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:44:24.900011   15123 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0531 18:44:24.900025   15123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 18:44:24.916028   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:44:25.631387   15123 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:44:25.631480   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:25.631505   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140 minikube.k8s.io/name=addons-133126 minikube.k8s.io/updated_at=2023_05_31T18_44_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:25.638099   15123 ops.go:34] apiserver oom_adj: -16
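
Note: ops.go records that kube-apiserver is running with a negative OOM adjustment, making it one of the last processes the kernel OOM killer will pick. The check above reads the legacy /proc/<pid>/oom_adj file; the modern oom_score_adj file holds the value the kubelet actually set (a sketch, run on the node):

	cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj
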
	I0531 18:44:25.695708   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:26.294891   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:26.794412   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:27.295123   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:27.794926   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:28.294663   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:28.794507   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:29.294391   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:29.795387   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:30.294537   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:30.794572   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:31.295211   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:31.795279   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:32.294336   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:32.794350   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:33.295008   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:33.794332   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:34.294586   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:34.795342   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:35.294735   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:35.794845   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:36.294999   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:36.795221   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:37.295333   15123 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:44:37.369022   15123 kubeadm.go:1076] duration metric: took 11.737604042s to wait for elevateKubeSystemPrivileges.
	I0531 18:44:37.369056   15123 kubeadm.go:406] StartCluster complete in 21.34122551s
	I0531 18:44:37.369079   15123 settings.go:142] acquiring lock: {Name:mk168872ecacf1e04453fffdd7073a8caed6462b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:37.369195   15123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 18:44:37.369640   15123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/kubeconfig: {Name:mk2e9ef864ed1e4aaf9a6e1bd97970840e57fe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:44:37.369854   15123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:44:37.369918   15123 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
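
Note: enable addons start receives the whole toEnable map at once, and each addon=true entry spawns its own Setting/Checking/installing sequence, which is why the log lines below interleave. Outside a test run the same addons can be toggled per profile, e.g. (a sketch using this run's profile name):

	minikube addons enable ingress -p addons-133126
	minikube addons list -p addons-133126
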
	I0531 18:44:37.369996   15123 addons.go:66] Setting volumesnapshots=true in profile "addons-133126"
	I0531 18:44:37.370006   15123 config.go:182] Loaded profile config "addons-133126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:44:37.370016   15123 addons.go:228] Setting addon volumesnapshots=true in "addons-133126"
	I0531 18:44:37.370028   15123 addons.go:66] Setting default-storageclass=true in profile "addons-133126"
	I0531 18:44:37.370028   15123 addons.go:66] Setting ingress=true in profile "addons-133126"
	I0531 18:44:37.370051   15123 addons.go:66] Setting cloud-spanner=true in profile "addons-133126"
	I0531 18:44:37.370053   15123 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-133126"
	I0531 18:44:37.370063   15123 addons.go:228] Setting addon cloud-spanner=true in "addons-133126"
	I0531 18:44:37.370072   15123 addons.go:66] Setting ingress-dns=true in profile "addons-133126"
	I0531 18:44:37.370079   15123 addons.go:66] Setting storage-provisioner=true in profile "addons-133126"
	I0531 18:44:37.370084   15123 addons.go:66] Setting helm-tiller=true in profile "addons-133126"
	I0531 18:44:37.370093   15123 addons.go:228] Setting addon storage-provisioner=true in "addons-133126"
	I0531 18:44:37.370294   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.370358   15123 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-133126"
	I0531 18:44:37.370073   15123 addons.go:228] Setting addon ingress=true in "addons-133126"
	I0531 18:44:37.370430   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.370059   15123 addons.go:66] Setting metrics-server=true in profile "addons-133126"
	I0531 18:44:37.370096   15123 addons.go:228] Setting addon helm-tiller=true in "addons-133126"
	I0531 18:44:37.370555   15123 addons.go:228] Setting addon metrics-server=true in "addons-133126"
	I0531 18:44:37.370594   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.370603   15123 addons.go:228] Setting addon ingress-dns=true in "addons-133126"
	I0531 18:44:37.370074   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.370639   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.370704   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.371629   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.371789   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.371790   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.371857   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.371865   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.370394   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.372175   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.370079   15123 addons.go:66] Setting gcp-auth=true in profile "addons-133126"
	I0531 18:44:37.370087   15123 addons.go:66] Setting inspektor-gadget=true in profile "addons-133126"
	I0531 18:44:37.372385   15123 addons.go:228] Setting addon inspektor-gadget=true in "addons-133126"
	I0531 18:44:37.372432   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.372491   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.370527   15123 addons.go:66] Setting registry=true in profile "addons-133126"
	I0531 18:44:37.372743   15123 addons.go:228] Setting addon registry=true in "addons-133126"
	I0531 18:44:37.372782   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.372949   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.373263   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.370047   15123 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-133126"
	I0531 18:44:37.373290   15123 mustload.go:65] Loading cluster: addons-133126
	I0531 18:44:37.373716   15123 config.go:182] Loaded profile config "addons-133126": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:44:37.373820   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.373835   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.374067   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.374914   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.398332   15123 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0531 18:44:37.400643   15123 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.16.1
	I0531 18:44:37.402742   15123 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0531 18:44:37.402781   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0531 18:44:37.402843   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.400686   15123 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0531 18:44:37.402955   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0531 18:44:37.402989   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.400627   15123 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:44:37.405156   15123 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0531 18:44:37.410761   15123 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0531 18:44:37.410783   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0531 18:44:37.410840   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.405131   15123 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:44:37.410948   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:44:37.410985   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.430622   15123 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0531 18:44:37.432775   15123 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0531 18:44:37.432806   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0531 18:44:37.432881   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.436794   15123 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0531 18:44:37.439165   15123 out.go:177]   - Using image docker.io/registry:2.8.1
	I0531 18:44:37.441125   15123 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0531 18:44:37.443397   15123 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0531 18:44:37.441099   15123 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0531 18:44:37.458883   15123 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0531 18:44:37.458907   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0531 18:44:37.458959   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.461432   15123 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0531 18:44:37.463447   15123 addons.go:228] Setting addon default-storageclass=true in "addons-133126"
	I0531 18:44:37.466848   15123 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0531 18:44:37.466935   15123 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.5
	I0531 18:44:37.467816   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.469194   15123 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.7.0
	I0531 18:44:37.469239   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.471779   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:37.472071   15123 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0531 18:44:37.474527   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.474767   15123 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0531 18:44:37.474791   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0531 18:44:37.474849   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.477134   15123 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0531 18:44:37.478276   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.478785   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:37.481459   15123 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 18:44:37.483285   15123 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0531 18:44:37.483378   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 18:44:37.487351   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.487359   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.487512   15123 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0531 18:44:37.487529   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16145 bytes)
	I0531 18:44:37.487582   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.489840   15123 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0531 18:44:37.493787   15123 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0531 18:44:37.493868   15123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
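Unescaped, the sed pipeline above edits the coredns ConfigMap in place: it inserts the stanza below ahead of the "forward . /etc/resolv.conf" directive (and a "log" directive ahead of "errors"), so that host.minikube.internal resolves to the host-side gateway from inside the cluster:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }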
	I0531 18:44:37.495568   15123 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0531 18:44:37.497321   15123 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0531 18:44:37.497338   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0531 18:44:37.497397   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.502375   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.509536   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.509868   15123 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:44:37.509891   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:44:37.509943   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:37.515289   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.523435   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.524023   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.528876   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.529171   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:37.850506   15123 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0531 18:44:37.850536   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0531 18:44:37.853953   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:44:37.859341   15123 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0531 18:44:37.859368   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0531 18:44:37.861278   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0531 18:44:37.862058   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0531 18:44:37.862115   15123 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0531 18:44:37.862130   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0531 18:44:37.944039   15123 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0531 18:44:37.944063   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0531 18:44:37.950979   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:44:37.958922   15123 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-133126" context rescaled to 1 replica
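The kapi.go:248 line above trims CoreDNS from its default two replicas down to one, which is enough for a single-node cluster and frees a little memory before the addons land. A minimal sketch of the equivalent operation through plain kubectl (minikube goes through its own kapi helper instead):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Scale the coredns deployment to a single replica.
		cmd := exec.Command("kubectl", "-n", "kube-system",
			"scale", "deployment", "coredns", "--replicas=1")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}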
	I0531 18:44:37.959047   15123 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:44:37.962565   15123 out.go:177] * Verifying Kubernetes components...
	I0531 18:44:37.964607   15123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:44:38.043773   15123 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0531 18:44:38.043863   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0531 18:44:38.051185   15123 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0531 18:44:38.051267   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0531 18:44:38.061304   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0531 18:44:38.065296   15123 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 18:44:38.065365   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0531 18:44:38.065664   15123 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0531 18:44:38.065686   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0531 18:44:38.142364   15123 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0531 18:44:38.142458   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0531 18:44:38.147872   15123 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0531 18:44:38.147945   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0531 18:44:38.160436   15123 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0531 18:44:38.160471   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0531 18:44:38.259666   15123 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 18:44:38.259750   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 18:44:38.263318   15123 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0531 18:44:38.263345   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0531 18:44:38.355912   15123 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0531 18:44:38.355992   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0531 18:44:38.363400   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0531 18:44:38.443489   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0531 18:44:38.453203   15123 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:44:38.453229   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 18:44:38.462807   15123 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0531 18:44:38.462833   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0531 18:44:38.543276   15123 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0531 18:44:38.543365   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0531 18:44:38.546848   15123 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0531 18:44:38.546928   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0531 18:44:38.644605   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 18:44:38.761276   15123 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0531 18:44:38.761352   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0531 18:44:38.762064   15123 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0531 18:44:38.762084   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0531 18:44:38.852264   15123 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 18:44:38.852350   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0531 18:44:38.957781   15123 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0531 18:44:38.957810   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0531 18:44:39.061813   15123 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0531 18:44:39.061849   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0531 18:44:39.160866   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 18:44:39.251091   15123 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0531 18:44:39.251157   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0531 18:44:39.361377   15123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.865789091s)
	I0531 18:44:39.361413   15123 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0531 18:44:39.451803   15123 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0531 18:44:39.451830   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0531 18:44:39.663673   15123 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0531 18:44:39.663751   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0531 18:44:39.743066   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0531 18:44:40.154544   15123 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0531 18:44:40.154624   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0531 18:44:40.545119   15123 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0531 18:44:40.545200   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0531 18:44:40.845628   15123 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0531 18:44:40.845707   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0531 18:44:41.143338   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0531 18:44:42.059209   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.20520476s)
	I0531 18:44:42.059316   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.198016388s)
	I0531 18:44:43.478662   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.616571598s)
	I0531 18:44:43.478691   15123 addons.go:464] Verifying addon ingress=true in "addons-133126"
	I0531 18:44:43.478748   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.527737946s)
	I0531 18:44:43.478768   15123 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.514132298s)
	I0531 18:44:43.478810   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.417481306s)
	I0531 18:44:43.478832   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.115351057s)
	I0531 18:44:43.478868   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.035352302s)
	I0531 18:44:43.478927   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.8342325s)
	I0531 18:44:43.479031   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.318062175s)
	I0531 18:44:43.479095   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.735938942s)
	I0531 18:44:43.481323   15123 out.go:177] * Verifying ingress addon...
	I0531 18:44:43.483581   15123 addons.go:464] Verifying addon metrics-server=true in "addons-133126"
	I0531 18:44:43.483618   15123 addons.go:464] Verifying addon registry=true in "addons-133126"
	W0531 18:44:43.483648   15123 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0531 18:44:43.483691   15123 retry.go:31] will retry after 284.201889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
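The failure above is a CRD establishment race: the batch applies a VolumeSnapshotClass object in the same kubectl invocation that creates its CRD, and the API server has not registered the new kind by the time the object is validated. minikube's answer is simply to retry 284ms later (the "apply --force" re-run below). A sketch of an alternative that avoids the race by waiting for the CRD first, using only standard kubectl verbs; file paths are the ones from this log:

	package main

	import (
		"os"
		"os/exec"
	)

	func run(args ...string) {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}

	func main() {
		// Create the CRD first, then block until the API server has established it.
		run("kubectl", "apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
		run("kubectl", "wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
		// Only now does a VolumeSnapshotClass object map to a known kind.
		run("kubectl", "apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	}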
	I0531 18:44:43.484457   15123 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0531 18:44:43.484457   15123 node_ready.go:35] waiting up to 6m0s for node "addons-133126" to be "Ready" ...
	I0531 18:44:43.487161   15123 out.go:177] * Verifying registry addon...
	I0531 18:44:43.491317   15123 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0531 18:44:43.544962   15123 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0531 18:44:43.544982   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:43.545360   15123 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0531 18:44:43.545391   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:43.768747   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 18:44:44.049054   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:44.049538   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:44.148380   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.004971025s)
	I0531 18:44:44.148434   15123 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-133126"
	I0531 18:44:44.150653   15123 out.go:177] * Verifying csi-hostpath-driver addon...
	I0531 18:44:44.153677   15123 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0531 18:44:44.158134   15123 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0531 18:44:44.158203   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
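Each kapi.go:96 wait loop above, and the long run of "current state: Pending" lines below, is the same pattern: list pods by label selector and re-check until every match reports Running. A reduced sketch under that reading, using the csi-hostpath-driver selector from the log and plain kubectl in place of minikube's client wrapper:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for {
			out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
				"-l", "kubernetes.io/minikube-addons=csi-hostpath-driver",
				"-o", "jsonpath={.items[*].status.phase}").Output()
			phases := strings.Fields(string(out))
			ready := err == nil && len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					ready = false
				}
			}
			if ready {
				fmt.Println("all matching pods Running:", phases)
				return
			}
			time.Sleep(3 * time.Second)
		}
	}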
	I0531 18:44:44.289444   15123 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0531 18:44:44.289509   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:44.305928   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
	I0531 18:44:44.396717   15123 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0531 18:44:44.413058   15123 addons.go:228] Setting addon gcp-auth=true in "addons-133126"
	I0531 18:44:44.413113   15123 host.go:66] Checking if "addons-133126" exists ...
	I0531 18:44:44.413554   15123 cli_runner.go:164] Run: docker container inspect addons-133126 --format={{.State.Status}}
	I0531 18:44:44.433691   15123 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0531 18:44:44.433750   15123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133126
	I0531 18:44:44.452729   15123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/addons-133126/id_rsa Username:docker}
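Between 18:44:44.289 and .452 the flow detours to wire up gcp-auth: the host's application-default credentials and project name are copied onto the node, then read back with "cat" as a sanity check before the addon is enabled. A rough sketch of that check step (paths and port from the log; the surrounding plumbing is assumed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Confirm the credentials file landed on the node before enabling gcp-auth.
		out, err := exec.Command("ssh", "-p", "32772", "docker@127.0.0.1",
			"cat /var/lib/minikube/google_application_credentials.json").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("credentials present on node (%d bytes)\n", len(out))
	}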
	I0531 18:44:44.548554   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:44.549608   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:44.662665   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:44.744847   15123 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794
	I0531 18:44:44.746979   15123 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0531 18:44:44.748947   15123 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0531 18:44:44.748964   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0531 18:44:44.766269   15123 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0531 18:44:44.766288   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0531 18:44:44.782037   15123 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0531 18:44:44.782058   15123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5474 bytes)
	I0531 18:44:44.797817   15123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0531 18:44:45.054179   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:45.054540   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:45.162951   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:45.555174   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
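The node_ready.go status lines interleaved below poll the node object until its Ready condition flips to True, which takes roughly 27 seconds in this run while the kubelet and CNI settle. The same check expressed with kubectl's JSONPath support (node name from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
		for {
			out, err := exec.Command("kubectl", "get", "node", "addons-133126",
				"-o", "jsonpath="+jsonpath).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(3 * time.Second)
		}
	}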
	I0531 18:44:45.556661   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:45.556844   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:45.663800   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:46.049759   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:46.050320   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:46.163549   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:46.549927   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:46.550577   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:46.665027   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:47.061987   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:47.062486   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:47.154578   15123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.356701596s)
	I0531 18:44:47.155612   15123 addons.go:464] Verifying addon gcp-auth=true in "addons-133126"
	I0531 18:44:47.158000   15123 out.go:177] * Verifying gcp-auth addon...
	I0531 18:44:47.160941   15123 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0531 18:44:47.249931   15123 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0531 18:44:47.250012   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:47.254082   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:47.549714   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:47.552475   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:47.663488   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:47.753978   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:48.049995   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:48.052284   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:44:48.052978   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:48.164123   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:48.253878   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:48.551115   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:48.552012   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:48.663672   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:48.755253   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:49.050244   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:49.050732   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:49.162889   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:49.254048   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:49.548070   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:49.549189   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:49.663159   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:49.754061   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:50.049454   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:50.049672   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:50.162030   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:50.253689   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:50.547470   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:44:50.549157   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:50.549428   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:50.662697   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:50.753475   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:51.048804   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:51.048955   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:51.162428   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:51.253878   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:51.548588   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:51.549836   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:51.661938   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:51.753763   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:52.049422   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:52.049540   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:52.162305   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:52.254254   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:52.548233   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:44:52.548779   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:52.548890   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:52.663209   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:52.753867   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:53.049535   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:53.049842   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:53.163843   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:53.254330   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:53.549303   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:53.549481   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:53.663792   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:53.753557   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:54.048739   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:54.048954   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:54.162775   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:54.253684   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:54.548947   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:54.549330   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:54.662405   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:54.753341   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:55.048956   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:44:55.049358   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:55.049628   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:55.163478   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:55.254339   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:55.548755   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:55.548961   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:55.662290   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:55.754067   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:56.048546   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:56.048778   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:56.162052   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:56.254178   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:56.548738   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:56.549798   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:56.663348   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:56.753773   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:57.048337   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:57.049508   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:57.163331   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:57.254512   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:57.548474   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:44:57.549091   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:57.549214   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:57.663161   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:57.754101   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:58.048794   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:58.049281   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:58.162888   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:58.253667   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:58.549355   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:58.549429   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:58.662421   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:58.753324   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:59.048986   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:59.048996   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:59.162351   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:59.252900   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:44:59.548394   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:44:59.548939   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:44:59.662245   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:44:59.754085   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:00.048044   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:45:00.048568   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:00.048892   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:00.162372   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:00.253083   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:00.548702   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:00.548854   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:00.662348   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:00.753053   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:01.048345   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:01.049133   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:01.162449   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:01.253104   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:01.548682   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:01.548733   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:01.662416   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:01.752970   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:02.048107   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:45:02.048781   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:02.049374   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:02.162596   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:02.253210   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:02.548561   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:02.548775   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:02.662026   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:02.753593   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:03.049117   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:03.049260   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:03.162563   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:03.252957   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:03.548729   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:03.549302   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:03.662518   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:03.752937   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:04.048544   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:04.049262   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:04.162274   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:04.253930   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:04.547558   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:45:04.548232   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:04.548936   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:04.661620   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:04.753234   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:05.048687   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:05.048783   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:05.162484   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:05.253518   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:05.548630   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:05.548897   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:05.662405   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:05.753092   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:06.048880   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:06.049010   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:06.162947   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:06.253817   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:06.548980   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:06.549136   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:06.662519   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:06.753456   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:07.047438   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:45:07.048617   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:07.049235   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:07.162296   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:07.253739   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:07.548492   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:07.549145   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:07.662171   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:07.754052   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:08.048832   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:08.049161   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:08.162235   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:08.253885   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:08.548076   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:08.548946   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:08.662273   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:08.754168   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:09.048005   15123 node_ready.go:58] node "addons-133126" has status "Ready":"False"
	I0531 18:45:09.048601   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:09.048678   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:09.162404   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:09.253334   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:09.548566   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:09.548743   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:09.662144   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:09.753811   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:10.049247   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:10.049371   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:10.162814   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:10.253528   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:10.552852   15123 node_ready.go:49] node "addons-133126" has status "Ready":"True"
	I0531 18:45:10.552876   15123 node_ready.go:38] duration metric: took 27.062928484s waiting for node "addons-133126" to be "Ready" ...
	I0531 18:45:10.552887   15123 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:45:10.554118   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:10.554719   15123 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0531 18:45:10.554741   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:10.560400   15123 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-r4znh" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:10.663238   15123 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0531 18:45:10.663259   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:10.753552   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:11.050825   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:11.050947   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:11.164131   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:11.253891   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:11.551230   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:11.551648   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:11.662871   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:11.752838   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:12.064279   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:12.064402   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:12.070265   15123 pod_ready.go:92] pod "coredns-5d78c9869d-r4znh" in "kube-system" namespace has status "Ready":"True"
	I0531 18:45:12.070286   15123 pod_ready.go:81] duration metric: took 1.509868154s waiting for pod "coredns-5d78c9869d-r4znh" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.070310   15123 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-133126" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.074336   15123 pod_ready.go:92] pod "etcd-addons-133126" in "kube-system" namespace has status "Ready":"True"
	I0531 18:45:12.074351   15123 pod_ready.go:81] duration metric: took 4.034373ms waiting for pod "etcd-addons-133126" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.074365   15123 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-133126" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.078436   15123 pod_ready.go:92] pod "kube-apiserver-addons-133126" in "kube-system" namespace has status "Ready":"True"
	I0531 18:45:12.078451   15123 pod_ready.go:81] duration metric: took 4.079739ms waiting for pod "kube-apiserver-addons-133126" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.078459   15123 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-133126" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.082023   15123 pod_ready.go:92] pod "kube-controller-manager-addons-133126" in "kube-system" namespace has status "Ready":"True"
	I0531 18:45:12.082040   15123 pod_ready.go:81] duration metric: took 3.574074ms waiting for pod "kube-controller-manager-addons-133126" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.082053   15123 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lmwvl" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.149381   15123 pod_ready.go:92] pod "kube-proxy-lmwvl" in "kube-system" namespace has status "Ready":"True"
	I0531 18:45:12.149409   15123 pod_ready.go:81] duration metric: took 67.349974ms waiting for pod "kube-proxy-lmwvl" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.149418   15123 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-133126" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.163395   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:12.253534   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:12.549001   15123 pod_ready.go:92] pod "kube-scheduler-addons-133126" in "kube-system" namespace has status "Ready":"True"
	I0531 18:45:12.549029   15123 pod_ready.go:81] duration metric: took 399.603375ms waiting for pod "kube-scheduler-addons-133126" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.549044   15123 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-jhps2" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:12.550047   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:12.550721   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:12.664775   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:12.753282   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:13.049855   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:13.050232   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:13.163509   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:13.253524   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:13.549354   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:13.549375   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:13.664926   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:13.754625   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:14.050408   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:14.050928   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:14.163373   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:14.253944   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:14.549869   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:14.549984   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:14.664715   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:14.753766   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:14.955181   15123 pod_ready.go:102] pod "metrics-server-844d8db974-jhps2" in "kube-system" namespace has status "Ready":"False"
	I0531 18:45:15.049635   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:15.049740   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:15.163746   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:15.254262   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:15.550334   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:15.550376   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:15.745560   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:15.754265   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:16.050458   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:16.050560   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:16.167995   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:16.253606   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:16.454979   15123 pod_ready.go:92] pod "metrics-server-844d8db974-jhps2" in "kube-system" namespace has status "Ready":"True"
	I0531 18:45:16.455004   15123 pod_ready.go:81] duration metric: took 3.905952944s waiting for pod "metrics-server-844d8db974-jhps2" in "kube-system" namespace to be "Ready" ...
	I0531 18:45:16.455023   15123 pod_ready.go:38] duration metric: took 5.902118776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
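The pod_ready.go phase above is a per-pod poll of the PodReady condition until it reports True or the 6m budget runs out. A minimal client-go sketch of the same loop, assuming a local kubeconfig; the 400ms interval and the hard-coded pod/namespace (taken from the log) are illustrative, not minikube's actual settings:

    // Editor's sketch (not minikube source): poll a pod's Ready condition
    // the way the pod_ready.go lines above describe.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollImmediate(400*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5d78c9869d-r4znh", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient lookup errors: keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil // no Ready condition reported yet
    	})
    	fmt.Println("pod ready:", err == nil)
    }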
	I0531 18:45:16.455038   15123 api_server.go:52] waiting for apiserver process to appear ...
	I0531 18:45:16.455074   15123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:45:16.466713   15123 api_server.go:72] duration metric: took 38.50758739s to wait for apiserver process to appear ...
	I0531 18:45:16.466739   15123 api_server.go:88] waiting for apiserver healthz status ...
	I0531 18:45:16.466757   15123 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:45:16.471170   15123 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:45:16.472234   15123 api_server.go:141] control plane version: v1.27.2
	I0531 18:45:16.472253   15123 api_server.go:131] duration metric: took 5.508124ms to wait for apiserver health ...
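The healthz check above is a plain HTTPS GET against the apiserver endpoint reported in the log; a 200 response with body `ok` counts as healthy. A minimal sketch, with certificate verification skipped as a stated shortcut (minikube itself trusts the cluster's CA material):

    // Editor's sketch: the healthz probe from the lines above as a plain
    // HTTPS GET. InsecureSkipVerify is for the sketch only.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		fmt.Println("unhealthy:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // healthy: 200 "ok"
    }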
	I0531 18:45:16.472260   15123 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:45:16.480567   15123 system_pods.go:59] 18 kube-system pods found
	I0531 18:45:16.480591   15123 system_pods.go:61] "coredns-5d78c9869d-r4znh" [d4211924-59a6-415b-a372-0ecfd18e13ed] Running
	I0531 18:45:16.480600   15123 system_pods.go:61] "csi-hostpath-attacher-0" [d53c36d5-12df-453e-8865-5ae18c1d4b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0531 18:45:16.480609   15123 system_pods.go:61] "csi-hostpath-resizer-0" [894124c9-b32a-433b-b87a-c8188c10f16c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0531 18:45:16.480616   15123 system_pods.go:61] "csi-hostpathplugin-tfh4d" [b11b566c-a9a1-4bb2-b15b-42c4c47db1e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0531 18:45:16.480630   15123 system_pods.go:61] "etcd-addons-133126" [c93f08fa-33ea-4a3d-bef8-2df76201d7e5] Running
	I0531 18:45:16.480636   15123 system_pods.go:61] "kindnet-7wqcx" [d7685a0e-5434-46f1-8af4-6bc2335beee6] Running
	I0531 18:45:16.480643   15123 system_pods.go:61] "kube-apiserver-addons-133126" [557db628-5259-4fad-98b3-d30c6ae613c2] Running
	I0531 18:45:16.480647   15123 system_pods.go:61] "kube-controller-manager-addons-133126" [c60b7c8e-307e-4c81-a869-d16d429e21be] Running
	I0531 18:45:16.480656   15123 system_pods.go:61] "kube-ingress-dns-minikube" [d47140c0-a5ee-4d38-ae53-c0b931442ca1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0531 18:45:16.480663   15123 system_pods.go:61] "kube-proxy-lmwvl" [f583635c-f84f-491d-bb28-cb88b50371b1] Running
	I0531 18:45:16.480668   15123 system_pods.go:61] "kube-scheduler-addons-133126" [07f11104-d7eb-4fa1-9259-5bc9fc76d16b] Running
	I0531 18:45:16.480674   15123 system_pods.go:61] "metrics-server-844d8db974-jhps2" [9bdea332-ca3f-4804-ad93-d18dd0d6ad06] Running
	I0531 18:45:16.480679   15123 system_pods.go:61] "registry-fffjl" [dba6dd7a-325e-4f40-ae58-f472a48ce54b] Running
	I0531 18:45:16.480690   15123 system_pods.go:61] "registry-proxy-dvxpf" [e83e8386-6995-4287-9ba6-92204936ebea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0531 18:45:16.480697   15123 system_pods.go:61] "snapshot-controller-75bbb956b9-26fx6" [45b25844-4094-4f80-9a84-fa24eff68071] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:45:16.480706   15123 system_pods.go:61] "snapshot-controller-75bbb956b9-h85pw" [84d2d2cd-8247-41e0-b027-40a7ad199b31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:45:16.480710   15123 system_pods.go:61] "storage-provisioner" [3beaf352-0082-4515-9a8d-0a705f07e844] Running
	I0531 18:45:16.480717   15123 system_pods.go:61] "tiller-deploy-6847666dc-56gj2" [d921de98-18b8-4777-9691-8873f4f7dd02] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0531 18:45:16.480722   15123 system_pods.go:74] duration metric: took 8.458353ms to wait for pod list to return data ...
	I0531 18:45:16.480731   15123 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:45:16.482626   15123 default_sa.go:45] found service account: "default"
	I0531 18:45:16.482642   15123 default_sa.go:55] duration metric: took 1.905422ms for default service account to be created ...
	I0531 18:45:16.482648   15123 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 18:45:16.491317   15123 system_pods.go:86] 18 kube-system pods found
	I0531 18:45:16.491352   15123 system_pods.go:89] "coredns-5d78c9869d-r4znh" [d4211924-59a6-415b-a372-0ecfd18e13ed] Running
	I0531 18:45:16.491368   15123 system_pods.go:89] "csi-hostpath-attacher-0" [d53c36d5-12df-453e-8865-5ae18c1d4b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0531 18:45:16.491380   15123 system_pods.go:89] "csi-hostpath-resizer-0" [894124c9-b32a-433b-b87a-c8188c10f16c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0531 18:45:16.491394   15123 system_pods.go:89] "csi-hostpathplugin-tfh4d" [b11b566c-a9a1-4bb2-b15b-42c4c47db1e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0531 18:45:16.491408   15123 system_pods.go:89] "etcd-addons-133126" [c93f08fa-33ea-4a3d-bef8-2df76201d7e5] Running
	I0531 18:45:16.491416   15123 system_pods.go:89] "kindnet-7wqcx" [d7685a0e-5434-46f1-8af4-6bc2335beee6] Running
	I0531 18:45:16.491425   15123 system_pods.go:89] "kube-apiserver-addons-133126" [557db628-5259-4fad-98b3-d30c6ae613c2] Running
	I0531 18:45:16.491436   15123 system_pods.go:89] "kube-controller-manager-addons-133126" [c60b7c8e-307e-4c81-a869-d16d429e21be] Running
	I0531 18:45:16.491452   15123 system_pods.go:89] "kube-ingress-dns-minikube" [d47140c0-a5ee-4d38-ae53-c0b931442ca1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0531 18:45:16.491465   15123 system_pods.go:89] "kube-proxy-lmwvl" [f583635c-f84f-491d-bb28-cb88b50371b1] Running
	I0531 18:45:16.491474   15123 system_pods.go:89] "kube-scheduler-addons-133126" [07f11104-d7eb-4fa1-9259-5bc9fc76d16b] Running
	I0531 18:45:16.491482   15123 system_pods.go:89] "metrics-server-844d8db974-jhps2" [9bdea332-ca3f-4804-ad93-d18dd0d6ad06] Running
	I0531 18:45:16.491494   15123 system_pods.go:89] "registry-fffjl" [dba6dd7a-325e-4f40-ae58-f472a48ce54b] Running
	I0531 18:45:16.491507   15123 system_pods.go:89] "registry-proxy-dvxpf" [e83e8386-6995-4287-9ba6-92204936ebea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0531 18:45:16.491521   15123 system_pods.go:89] "snapshot-controller-75bbb956b9-26fx6" [45b25844-4094-4f80-9a84-fa24eff68071] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:45:16.491537   15123 system_pods.go:89] "snapshot-controller-75bbb956b9-h85pw" [84d2d2cd-8247-41e0-b027-40a7ad199b31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0531 18:45:16.491548   15123 system_pods.go:89] "storage-provisioner" [3beaf352-0082-4515-9a8d-0a705f07e844] Running
	I0531 18:45:16.491561   15123 system_pods.go:89] "tiller-deploy-6847666dc-56gj2" [d921de98-18b8-4777-9691-8873f4f7dd02] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0531 18:45:16.491574   15123 system_pods.go:126] duration metric: took 8.91953ms to wait for k8s-apps to be running ...
	I0531 18:45:16.491583   15123 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 18:45:16.491639   15123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:45:16.502874   15123 system_svc.go:56] duration metric: took 11.281686ms WaitForService to wait for kubelet.
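The kubelet probe above shells out to systemd over ssh_runner; `systemctl is-active --quiet` exits 0 only while the unit is active. A sketch of the same check run locally with the unit name passed directly:

    // Editor's sketch: the kubelet liveness probe, run locally instead of
    // through minikube's ssh_runner. Exit code 0 means the unit is active.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }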
	I0531 18:45:16.502902   15123 kubeadm.go:581] duration metric: took 38.543782527s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 18:45:16.502926   15123 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:45:16.549593   15123 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0531 18:45:16.549617   15123 node_conditions.go:123] node cpu capacity is 8
	I0531 18:45:16.549628   15123 node_conditions.go:105] duration metric: took 46.698012ms to run NodePressure ...
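The NodePressure step reads capacity straight off the node object; the ephemeral-storage and cpu figures above come from node.Status.Capacity. A sketch printing the same two fields, assuming the kubeconfig setup from the earlier sketch:

    // Editor's sketch: read the two capacity figures printed above from the
    // node object. Error handling trimmed for brevity.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	cs, _ := kubernetes.NewForConfig(cfg)
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-133126", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
    }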
	I0531 18:45:16.549637   15123 start.go:228] waiting for startup goroutines ...
	I0531 18:45:16.550382   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:16.550412   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:16.663726   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:16.754055   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:17.050903   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:17.050951   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:17.163782   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:17.253994   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:17.549147   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:17.549576   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:17.663324   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:17.753521   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:18.049952   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:18.050300   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:18.163746   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:18.253832   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:18.550341   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:18.550583   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:18.663839   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:18.753800   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:19.049881   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:19.050073   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:19.163714   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:19.254646   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:19.549904   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:19.550177   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:19.664381   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:19.754107   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:20.050759   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:20.050886   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:20.163357   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:20.253072   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:20.549508   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:20.550314   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:20.667200   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:20.754239   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:21.049054   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:21.049267   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:21.163755   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:21.254818   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:21.550507   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:21.551822   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:21.663612   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:21.754387   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:22.049203   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:22.049310   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:22.165963   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:22.253627   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:22.549876   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:22.549969   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:22.663424   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:22.754305   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:23.049466   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:23.049612   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 18:45:23.163289   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:23.253638   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:23.549854   15123 kapi.go:107] duration metric: took 40.05853262s to wait for kubernetes.io/minikube-addons=registry ...
	I0531 18:45:23.549910   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:23.663777   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:23.753944   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:24.049599   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:24.163727   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:24.253844   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:24.549300   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:24.663296   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:24.753593   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:25.050633   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:25.164880   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:25.254689   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:25.553819   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:25.663808   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:25.753126   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:26.050169   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:26.163029   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:26.254060   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:26.549458   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:26.663401   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:26.753907   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:27.050371   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:27.197887   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:27.253587   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:27.581312   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:27.663589   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:27.754471   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:28.049533   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:28.211095   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:28.253865   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:28.549663   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:28.662733   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:28.753840   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:29.049629   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:29.164677   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:29.253408   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:29.549273   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:29.663706   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:29.754063   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:30.049464   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:30.163210   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:30.253938   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:30.556189   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:30.664811   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:30.755876   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:31.050058   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:31.164731   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:31.253433   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:31.550376   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:31.663311   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:31.753868   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:32.050251   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:32.164285   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:32.254522   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:32.549682   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:32.663475   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:32.754187   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:33.050163   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:33.163983   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:33.254330   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:33.556254   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:33.663889   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:33.753449   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:34.051194   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:34.163799   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:34.255372   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:34.551024   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:34.668600   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:34.753301   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:35.049688   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:35.164001   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:35.253684   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:35.550107   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:35.663848   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:35.754200   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:36.050162   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:36.163973   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:36.255391   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:36.550276   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:36.664627   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:36.754175   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:37.049803   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:37.164637   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:37.256111   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:37.550167   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:37.664326   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:37.753676   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:38.049254   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:38.164050   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:38.254387   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:38.549921   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:38.663204   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:38.753509   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:39.051839   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:39.164247   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:39.254591   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:39.550422   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:39.663649   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:39.753752   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:40.050005   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:40.164237   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:40.253409   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:40.550001   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:40.663021   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:40.753766   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:41.049257   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:41.163084   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:41.253673   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:41.549515   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:41.663911   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:41.754057   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:42.050346   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:42.162820   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 18:45:42.253107   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:42.550119   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:42.662923   15123 kapi.go:107] duration metric: took 58.509246517s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0531 18:45:42.753206   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:43.049578   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:43.253615   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:43.550977   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:43.753067   15123 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 18:45:44.049938   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:44.254058   15123 kapi.go:107] duration metric: took 57.093115744s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0531 18:45:44.256418   15123 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-133126 cluster.
	I0531 18:45:44.260652   15123 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0531 18:45:44.263413   15123 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
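The opt-out described in the message above is a plain pod label. The `gcp-auth-skip-secret` key comes from the message itself; the `"true"` value and all other names in this sketch are assumptions for illustration:

    // Editor's sketch: a pod carrying the gcp-auth opt-out label, built with
    // client-go types. Pod name and value are hypothetical.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:   "no-gcp-creds",
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{Name: "app", Image: "gcr.io/google-samples/hello-app:1.0"}},
    		},
    	}
    	fmt.Println(pod.Labels)
    }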
	I0531 18:45:44.550931   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:45.050788   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:45.550562   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:46.050544   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:46.550023   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:47.049685   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:47.550241   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:48.049063   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:48.549628   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:49.096277   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:49.550410   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:50.050279   15123 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 18:45:50.549166   15123 kapi.go:107] duration metric: took 1m7.064706291s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0531 18:45:50.551625   15123 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, inspektor-gadget, helm-tiller, metrics-server, cloud-spanner, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0531 18:45:50.553420   15123 addons.go:499] enable addons completed in 1m13.183502869s: enabled=[storage-provisioner ingress-dns inspektor-gadget helm-tiller metrics-server cloud-spanner default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0531 18:45:50.553459   15123 start.go:233] waiting for cluster config update ...
	I0531 18:45:50.553480   15123 start.go:242] writing updated cluster config ...
	I0531 18:45:50.553716   15123 ssh_runner.go:195] Run: rm -f paused
	I0531 18:45:50.598826   15123 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0531 18:45:50.601248   15123 out.go:177] * Done! kubectl is now configured to use "addons-133126" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* May 31 18:48:25 addons-133126 crio[955]: time="2023-05-31 18:48:25.732143254Z" level=info msg="Removing container: 093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7" id=10db462d-f437-4095-96c6-accdf7c1777c name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:48:25 addons-133126 crio[955]: time="2023-05-31 18:48:25.749630604Z" level=info msg="Removed container 093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=10db462d-f437-4095-96c6-accdf7c1777c name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:48:26 addons-133126 crio[955]: time="2023-05-31 18:48:26.284214958Z" level=info msg="Stopping container: b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb (timeout: 1s)" id=ae5724eb-e173-4172-a0e0-1b6562c63a32 name=/runtime.v1.RuntimeService/StopContainer
	May 31 18:48:26 addons-133126 crio[955]: time="2023-05-31 18:48:26.396456202Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea" id=ec47955c-9d14-473f-8312-4e2b374bb946 name=/runtime.v1.ImageService/PullImage
	May 31 18:48:26 addons-133126 crio[955]: time="2023-05-31 18:48:26.397235931Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=be72b1d4-0e41-4b45-9bf4-0eda095dc4d4 name=/runtime.v1.ImageService/ImageStatus
	May 31 18:48:26 addons-133126 crio[955]: time="2023-05-31 18:48:26.398098810Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=be72b1d4-0e41-4b45-9bf4-0eda095dc4d4 name=/runtime.v1.ImageService/ImageStatus
	May 31 18:48:26 addons-133126 crio[955]: time="2023-05-31 18:48:26.399076041Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-c4k7t/hello-world-app" id=ff56a050-b28d-4a3c-bcc9-06b334924dee name=/runtime.v1.RuntimeService/CreateContainer
	May 31 18:48:26 addons-133126 crio[955]: time="2023-05-31 18:48:26.399177182Z" level=warning msg="Allowed annotations are specified for workload []"
	May 31 18:48:26 addons-133126 crio[955]: time="2023-05-31 18:48:26.518753489Z" level=info msg="Created container 563e2f839bf7d6202987e29eafdce8cfc6d426e510a2a38e615026afa918e79f: default/hello-world-app-65bdb79f98-c4k7t/hello-world-app" id=ff56a050-b28d-4a3c-bcc9-06b334924dee name=/runtime.v1.RuntimeService/CreateContainer
	May 31 18:48:26 addons-133126 crio[955]: time="2023-05-31 18:48:26.519363930Z" level=info msg="Starting container: 563e2f839bf7d6202987e29eafdce8cfc6d426e510a2a38e615026afa918e79f" id=2728e1fe-54cb-4a9b-a552-489a70995806 name=/runtime.v1.RuntimeService/StartContainer
	May 31 18:48:26 addons-133126 crio[955]: time="2023-05-31 18:48:26.527386865Z" level=info msg="Started container" PID=9255 containerID=563e2f839bf7d6202987e29eafdce8cfc6d426e510a2a38e615026afa918e79f description=default/hello-world-app-65bdb79f98-c4k7t/hello-world-app id=2728e1fe-54cb-4a9b-a552-489a70995806 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc348a9effc8c7d8d794afda5d2cd01859ed74fe74a759b20a12bcc0d6db7237
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.354685499Z" level=warning msg="Stopping container b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=ae5724eb-e173-4172-a0e0-1b6562c63a32 name=/runtime.v1.RuntimeService/StopContainer
	May 31 18:48:27 addons-133126 conmon[5995]: conmon b66b90f4dd8277fd854e <ninfo>: container 6007 exited with status 137
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.499767688Z" level=info msg="Stopped container b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb: ingress-nginx/ingress-nginx-controller-858bcd4f57-2n5hb/controller" id=ae5724eb-e173-4172-a0e0-1b6562c63a32 name=/runtime.v1.RuntimeService/StopContainer
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.500319871Z" level=info msg="Stopping pod sandbox: b7a45c0db1540b7cf28c1af1e3071a7a660415f3404ca937650879fae51c3163" id=10bd1c89-869c-4272-aac8-bb44ebaf02a1 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.503154724Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-2HKWBTJYLHLPL2OW - [0:0]\n:KUBE-HP-PYKQEXIH2PL3JRFR - [0:0]\n-X KUBE-HP-2HKWBTJYLHLPL2OW\n-X KUBE-HP-PYKQEXIH2PL3JRFR\nCOMMIT\n"
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.504644629Z" level=info msg="Closing host port tcp:80"
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.504692929Z" level=info msg="Closing host port tcp:443"
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.506021865Z" level=info msg="Host port tcp:80 does not have an open socket"
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.506039594Z" level=info msg="Host port tcp:443 does not have an open socket"
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.506163520Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-858bcd4f57-2n5hb Namespace:ingress-nginx ID:b7a45c0db1540b7cf28c1af1e3071a7a660415f3404ca937650879fae51c3163 UID:32f89ceb-291f-4619-a2a4-869ebec8127e NetNS:/var/run/netns/6d8f27e6-4b55-4057-ac52-7b3d483e74ee Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.506272986Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-858bcd4f57-2n5hb from CNI network \"kindnet\" (type=ptp)"
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.545635764Z" level=info msg="Stopped pod sandbox: b7a45c0db1540b7cf28c1af1e3071a7a660415f3404ca937650879fae51c3163" id=10bd1c89-869c-4272-aac8-bb44ebaf02a1 name=/runtime.v1.RuntimeService/StopPodSandbox
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.740795765Z" level=info msg="Removing container: b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb" id=ddf1806b-b357-442f-8969-962a09210f7a name=/runtime.v1.RuntimeService/RemoveContainer
	May 31 18:48:27 addons-133126 crio[955]: time="2023-05-31 18:48:27.756421273Z" level=info msg="Removed container b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb: ingress-nginx/ingress-nginx-controller-858bcd4f57-2n5hb/controller" id=ddf1806b-b357-442f-8969-962a09210f7a name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	563e2f839bf7d       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      7 seconds ago       Running             hello-world-app           0                   fc348a9effc8c       hello-world-app-65bdb79f98-c4k7t
	09b39e75ef15b       docker.io/library/nginx@sha256:0b0af14a00ea0e4fd9b09e77d2b89b71b5c5a97f9aa073553f355415bc34ae33                              2 minutes ago       Running             nginx                     0                   b53a07e1831c3       nginx
	e8401c5608cbb       ghcr.io/headlamp-k8s/headlamp@sha256:553bbb3a9a8fa54877d672bd8362248bf63776b684817a7a9a2b39a69acd6846                        2 minutes ago       Running             headlamp                  0                   dca14d7f86410       headlamp-6b5756787-mvlwx
	861e71ef9ae84       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   7e1b1bc2792b3       gcp-auth-58478865f7-m6rcw
	b3f970a52c33d       5a86b03a88d2316e2317c2576449a95ddbd105d69b2fe7b01d667b0ebab37422                                                             2 minutes ago       Exited              patch                     2                   ad281ad65de78       ingress-nginx-admission-patch-ccm2k
	e4ab32c0e686e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:01d181618f270f2a96c04006f33b2699ad3ccb02da48d0f89b22abce084b292f   2 minutes ago       Exited              create                    0                   0cce8bf2b2f30       ingress-nginx-admission-create-kbgf2
	3d1f01fb157b1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   965a042a0c0b8       storage-provisioner
	63ba665730a7f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   3fd6548fcfb47       coredns-5d78c9869d-r4znh
	3b42f06962920       b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee                                                             3 minutes ago       Running             kube-proxy                0                   e2d8443d04e26       kube-proxy-lmwvl
	35cc5e0eb4aa8       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             3 minutes ago       Running             kindnet-cni               0                   60746df8b6888       kindnet-7wqcx
	ca9603416772c       ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12                                                             4 minutes ago       Running             kube-controller-manager   0                   c0531032edb3b       kube-controller-manager-addons-133126
	00bb6c1307fb9       c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370                                                             4 minutes ago       Running             kube-apiserver            0                   3a36524a10090       kube-apiserver-addons-133126
	cbaffbd05bc05       89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0                                                             4 minutes ago       Running             kube-scheduler            0                   984837fcb53f9       kube-scheduler-addons-133126
	d80098893fbfd       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   08d2954169d57       etcd-addons-133126
	
	* 
	* ==> coredns [63ba665730a7f7490cb00400d0a2525c009a8acead035c2ba33841a7ce8e8145] <==
	* [INFO] 10.244.0.16:47121 - 49493 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084912s
	[INFO] 10.244.0.16:45978 - 16653 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005791229s
	[INFO] 10.244.0.16:45978 - 32684 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.007715753s
	[INFO] 10.244.0.16:47708 - 24550 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005105644s
	[INFO] 10.244.0.16:47708 - 1483 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005394798s
	[INFO] 10.244.0.16:42200 - 43518 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006317917s
	[INFO] 10.244.0.16:42200 - 38718 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.008103584s
	[INFO] 10.244.0.16:45997 - 511 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000076774s
	[INFO] 10.244.0.16:45997 - 45466 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000097937s
	[INFO] 10.244.0.17:45299 - 42503 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000180818s
	[INFO] 10.244.0.17:35377 - 24261 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000241721s
	[INFO] 10.244.0.17:38792 - 21954 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108614s
	[INFO] 10.244.0.17:59785 - 6768 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159087s
	[INFO] 10.244.0.17:55463 - 64068 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093825s
	[INFO] 10.244.0.17:38783 - 9294 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000154229s
	[INFO] 10.244.0.17:33336 - 26092 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007623315s
	[INFO] 10.244.0.17:45066 - 35676 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.009870569s
	[INFO] 10.244.0.17:42847 - 58513 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.036399428s
	[INFO] 10.244.0.17:36223 - 46535 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.036536221s
	[INFO] 10.244.0.17:59323 - 14008 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005167824s
	[INFO] 10.244.0.17:46551 - 54520 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005257759s
	[INFO] 10.244.0.17:40543 - 53067 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000699361s
	[INFO] 10.244.0.17:55383 - 52132 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000714293s
	[INFO] 10.244.0.21:57289 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000171911s
	[INFO] 10.244.0.21:42768 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122487s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-133126
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-133126
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=addons-133126
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T18_44_25_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-133126
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 18:44:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-133126
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 18:48:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 18:46:27 +0000   Wed, 31 May 2023 18:44:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 18:46:27 +0000   Wed, 31 May 2023 18:44:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 18:46:27 +0000   Wed, 31 May 2023 18:44:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 18:46:27 +0000   Wed, 31 May 2023 18:45:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-133126
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccd75f2c1085432ca34d1bd9b43a5c8c
	  System UUID:                241b9610-f2e7-419a-bab0-77e42cc829a0
	  Boot ID:                    858e553b-6392-44c5-a611-8f56a2b0fab6
	  Kernel Version:             5.15.0-1035-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-c4k7t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-58478865f7-m6rcw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  headlamp                    headlamp-6b5756787-mvlwx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 coredns-5d78c9869d-r4znh                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m57s
	  kube-system                 etcd-addons-133126                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-7wqcx                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m57s
	  kube-system                 kube-apiserver-addons-133126             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-addons-133126    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-lmwvl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-133126             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  Starting                 4m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m16s (x8 over 4m16s)  kubelet          Node addons-133126 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s (x8 over 4m16s)  kubelet          Node addons-133126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s (x8 over 4m16s)  kubelet          Node addons-133126 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s                  kubelet          Node addons-133126 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s                  kubelet          Node addons-133126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s                  kubelet          Node addons-133126 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m58s                  node-controller  Node addons-133126 event: Registered Node addons-133126 in Controller
	  Normal  NodeReady                3m24s                  kubelet          Node addons-133126 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.009757] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004410] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.006288] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000798] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000704] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000681] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000792] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000795] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000751] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000788] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.428333] kauditd_printk_skb: 34 callbacks suppressed
	[May31 18:46] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 26 cb 5e 90 28 94 c2 06 0b a9 45 57 08 00
	[  +1.012561] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 cb 5e 90 28 94 c2 06 0b a9 45 57 08 00
	[  +2.015870] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 cb 5e 90 28 94 c2 06 0b a9 45 57 08 00
	[  +4.251684] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 26 cb 5e 90 28 94 c2 06 0b a9 45 57 08 00
	[  +8.191393] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 26 cb 5e 90 28 94 c2 06 0b a9 45 57 08 00
	[ +16.126796] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 cb 5e 90 28 94 c2 06 0b a9 45 57 08 00
	[May31 18:47] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 26 cb 5e 90 28 94 c2 06 0b a9 45 57 08 00
	
	* 
	* ==> etcd [d80098893fbfdd367d55bfa8cdec2ebaa2064e2630bfef7eda05818b57d1b6f2] <==
	* {"level":"info","ts":"2023-05-31T18:44:20.153Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-133126 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-31T18:44:20.153Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T18:44:20.153Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:44:20.153Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T18:44:20.153Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-31T18:44:20.153Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-31T18:44:20.154Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:44:20.154Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:44:20.154Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T18:44:20.155Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-05-31T18:44:20.155Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-31T18:44:40.543Z","caller":"traceutil/trace.go:171","msg":"trace[1039857048] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"280.968426ms","start":"2023-05-31T18:44:40.262Z","end":"2023-05-31T18:44:40.543Z","steps":["trace[1039857048] 'process raft request'  (duration: 190.299599ms)","trace[1039857048] 'compare'  (duration: 90.555036ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:44:40.545Z","caller":"traceutil/trace.go:171","msg":"trace[1782063644] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"183.071271ms","start":"2023-05-31T18:44:40.362Z","end":"2023-05-31T18:44:40.545Z","steps":["trace[1782063644] 'process raft request'  (duration: 182.679315ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:45:49.093Z","caller":"traceutil/trace.go:171","msg":"trace[1948026207] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"103.314929ms","start":"2023-05-31T18:45:48.989Z","end":"2023-05-31T18:45:49.093Z","steps":["trace[1948026207] 'process raft request'  (duration: 42.596973ms)","trace[1948026207] 'compare'  (duration: 60.595733ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:46:00.479Z","caller":"traceutil/trace.go:171","msg":"trace[1343764125] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"107.740226ms","start":"2023-05-31T18:46:00.371Z","end":"2023-05-31T18:46:00.479Z","steps":["trace[1343764125] 'process raft request'  (duration: 107.515262ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:46:00.591Z","caller":"traceutil/trace.go:171","msg":"trace[1264830076] transaction","detail":"{read_only:false; response_revision:1182; number_of_response:1; }","duration":"102.863594ms","start":"2023-05-31T18:46:00.488Z","end":"2023-05-31T18:46:00.591Z","steps":["trace[1264830076] 'process raft request'  (duration: 102.081711ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:46:22.519Z","caller":"traceutil/trace.go:171","msg":"trace[669525508] transaction","detail":"{read_only:false; response_revision:1364; number_of_response:1; }","duration":"106.5897ms","start":"2023-05-31T18:46:22.413Z","end":"2023-05-31T18:46:22.519Z","steps":["trace[669525508] 'process raft request'  (duration: 54.346874ms)","trace[669525508] 'compare'  (duration: 52.05423ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:46:22.688Z","caller":"traceutil/trace.go:171","msg":"trace[1073696125] transaction","detail":"{read_only:false; response_revision:1366; number_of_response:1; }","duration":"155.605192ms","start":"2023-05-31T18:46:22.532Z","end":"2023-05-31T18:46:22.688Z","steps":["trace[1073696125] 'process raft request'  (duration: 115.967366ms)","trace[1073696125] 'compare'  (duration: 39.477186ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:46:22.688Z","caller":"traceutil/trace.go:171","msg":"trace[1470604150] transaction","detail":"{read_only:false; response_revision:1367; number_of_response:1; }","duration":"143.836211ms","start":"2023-05-31T18:46:22.544Z","end":"2023-05-31T18:46:22.688Z","steps":["trace[1470604150] 'process raft request'  (duration: 143.781873ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:46:22.688Z","caller":"traceutil/trace.go:171","msg":"trace[883498936] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1366; }","duration":"151.918308ms","start":"2023-05-31T18:46:22.536Z","end":"2023-05-31T18:46:22.688Z","steps":["trace[883498936] 'process raft request'  (duration: 151.791927ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:46:22.893Z","caller":"traceutil/trace.go:171","msg":"trace[887405126] linearizableReadLoop","detail":"{readStateIndex:1420; appliedIndex:1419; }","duration":"113.635618ms","start":"2023-05-31T18:46:22.780Z","end":"2023-05-31T18:46:22.893Z","steps":["trace[887405126] 'read index received'  (duration: 62.736342ms)","trace[887405126] 'applied index is now lower than readState.Index'  (duration: 50.898476ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-31T18:46:22.893Z","caller":"traceutil/trace.go:171","msg":"trace[1503090655] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"115.3104ms","start":"2023-05-31T18:46:22.778Z","end":"2023-05-31T18:46:22.893Z","steps":["trace[1503090655] 'process raft request'  (duration: 64.379908ms)","trace[1503090655] 'compare'  (duration: 50.824862ms)"],"step_count":2}
	{"level":"warn","ts":"2023-05-31T18:46:22.894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.769365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" ","response":"range_response_count:1 size:1602"}
	{"level":"info","ts":"2023-05-31T18:46:22.894Z","caller":"traceutil/trace.go:171","msg":"trace[463386408] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo; range_end:; response_count:1; response_revision:1370; }","duration":"113.90871ms","start":"2023-05-31T18:46:22.780Z","end":"2023-05-31T18:46:22.894Z","steps":["trace[463386408] 'agreement among raft nodes before linearized reading'  (duration: 113.702935ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-31T18:46:28.668Z","caller":"traceutil/trace.go:171","msg":"trace[200007850] transaction","detail":"{read_only:false; response_revision:1415; number_of_response:1; }","duration":"140.158054ms","start":"2023-05-31T18:46:28.528Z","end":"2023-05-31T18:46:28.668Z","steps":["trace[200007850] 'process raft request'  (duration: 139.995312ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [861e71ef9ae847636506a1b7573e2b28d9057d35b8d9fe40a0f92fe0d9c4f3e0] <==
	* 2023/05/31 18:45:43 GCP Auth Webhook started!
	2023/05/31 18:45:51 Ready to marshal response ...
	2023/05/31 18:45:51 Ready to write response ...
	2023/05/31 18:45:51 Ready to marshal response ...
	2023/05/31 18:45:51 Ready to write response ...
	2023/05/31 18:45:51 Ready to marshal response ...
	2023/05/31 18:45:51 Ready to write response ...
	2023/05/31 18:45:55 Ready to marshal response ...
	2023/05/31 18:45:55 Ready to write response ...
	2023/05/31 18:46:00 Ready to marshal response ...
	2023/05/31 18:46:00 Ready to write response ...
	2023/05/31 18:46:02 Ready to marshal response ...
	2023/05/31 18:46:02 Ready to write response ...
	2023/05/31 18:46:11 Ready to marshal response ...
	2023/05/31 18:46:11 Ready to write response ...
	2023/05/31 18:46:27 Ready to marshal response ...
	2023/05/31 18:46:27 Ready to write response ...
	2023/05/31 18:48:24 Ready to marshal response ...
	2023/05/31 18:48:24 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:48:34 up 31 min,  0 users,  load average: 0.23, 0.56, 0.29
	Linux addons-133126 5.15.0-1035-gcp #43~20.04.1-Ubuntu SMP Mon May 22 16:49:11 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [35cc5e0eb4aa8e9ce4e1cec741d18ad1bd2f094d8bd1a1e589d62af42a50b964] <==
	* I0531 18:46:30.291608       1 main.go:227] handling current node
	I0531 18:46:40.304208       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:46:40.304245       1 main.go:227] handling current node
	I0531 18:46:50.316527       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:46:50.316554       1 main.go:227] handling current node
	I0531 18:47:00.329051       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:47:00.329077       1 main.go:227] handling current node
	I0531 18:47:10.333034       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:47:10.333061       1 main.go:227] handling current node
	I0531 18:47:20.345003       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:47:20.345027       1 main.go:227] handling current node
	I0531 18:47:30.348892       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:47:30.348928       1 main.go:227] handling current node
	I0531 18:47:40.361286       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:47:40.361307       1 main.go:227] handling current node
	I0531 18:47:50.367093       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:47:50.367117       1 main.go:227] handling current node
	I0531 18:48:00.379113       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:00.379142       1 main.go:227] handling current node
	I0531 18:48:10.391428       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:10.391452       1 main.go:227] handling current node
	I0531 18:48:20.394925       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:20.394949       1 main.go:227] handling current node
	I0531 18:48:30.403623       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:48:30.403648       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [00bb6c1307fb9ef7bba04451daaf5c2adc10e5bc46c7121de740a55450282fa9] <==
	* I0531 18:46:42.678836       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0531 18:46:42.678889       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0531 18:46:42.685494       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0531 18:46:42.685620       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0531 18:46:42.694608       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0531 18:46:42.694736       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0531 18:46:42.695508       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0531 18:46:42.695649       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0531 18:46:42.705806       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0531 18:46:42.705861       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0531 18:46:42.711026       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0531 18:46:42.711073       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0531 18:46:42.743814       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0531 18:46:42.743870       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0531 18:46:42.753087       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0531 18:46:42.753129       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0531 18:46:43.695853       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0531 18:46:43.753511       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0531 18:46:43.760201       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0531 18:47:17.090515       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0531 18:47:17.090537       1 handler_proxy.go:100] no RequestInfo found in the context
	E0531 18:47:17.090573       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0531 18:47:17.090579       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0531 18:48:24.757998       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.108.222.253]
	
	* 
	* ==> kube-controller-manager [ca9603416772c4a3543b52b7f69eeb8f9e5e670f2121e27c745ca61582cb083a] <==
	* I0531 18:47:07.059522       1 shared_informer.go:318] Caches are synced for garbage collector
	W0531 18:47:16.969913       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:16.969943       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:19.182344       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:19.182378       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:20.223909       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:20.223941       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:25.953501       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:25.953535       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:42.754785       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:42.754813       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:48.049480       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:48.049517       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:47:48.697056       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:47:48.697089       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:48:19.044566       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:48:19.044596       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0531 18:48:24.594091       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0531 18:48:24.605361       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-c4k7t"
	I0531 18:48:26.268387       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0531 18:48:26.274501       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0531 18:48:26.787330       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:48:26.787359       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0531 18:48:33.302730       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0531 18:48:33.302768       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [3b42f06962920ae6b06afccdacb42df6815ffb1104dd169ff3a4c02d36cded74] <==
	* I0531 18:44:40.660177       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0531 18:44:40.660424       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0531 18:44:40.660504       1 server_others.go:551] "Using iptables proxy"
	I0531 18:44:41.645122       1 server_others.go:190] "Using iptables Proxier"
	I0531 18:44:41.645165       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 18:44:41.645176       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0531 18:44:41.645202       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0531 18:44:41.645241       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 18:44:41.646452       1 server.go:657] "Version info" version="v1.27.2"
	I0531 18:44:41.646476       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:44:41.648066       1 config.go:188] "Starting service config controller"
	I0531 18:44:41.651203       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0531 18:44:41.648621       1 config.go:97] "Starting endpoint slice config controller"
	I0531 18:44:41.649029       1 config.go:315] "Starting node config controller"
	I0531 18:44:41.651327       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0531 18:44:41.651402       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0531 18:44:41.751639       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0531 18:44:41.751737       1 shared_informer.go:318] Caches are synced for node config
	I0531 18:44:41.751784       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [cbaffbd05bc05695aaaf5fc183a66921f669b59d115e9ca2898912dda0a6c43c] <==
	* W0531 18:44:21.654978       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:44:21.655019       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:44:21.655092       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:44:21.655129       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:44:21.655257       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:44:21.655302       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 18:44:21.655385       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:44:21.655423       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 18:44:22.488693       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:44:22.488727       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 18:44:22.563209       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:44:22.563236       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:44:22.573512       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:44:22.573562       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:44:22.624647       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:44:22.624673       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:44:22.682583       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:44:22.682623       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:44:22.684687       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:44:22.684715       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:44:22.702849       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:44:22.702878       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 18:44:22.714063       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:44:22.714095       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0531 18:44:25.347761       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 31 18:48:25 addons-133126 kubelet[1559]: W0531 18:48:25.251148    1559 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bef777d9462baa5aecaec91590420f1aaa844b165b3a4b61005f8108c4b76d5d/crio/crio-fc348a9effc8c7d8d794afda5d2cd01859ed74fe74a759b20a12bcc0d6db7237 WatchSource:0}: Error finding container fc348a9effc8c7d8d794afda5d2cd01859ed74fe74a759b20a12bcc0d6db7237: Status 404 returned error can't find the container with id fc348a9effc8c7d8d794afda5d2cd01859ed74fe74a759b20a12bcc0d6db7237
	May 31 18:48:25 addons-133126 kubelet[1559]: I0531 18:48:25.544478    1559 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc5gj\" (UniqueName: \"kubernetes.io/projected/d47140c0-a5ee-4d38-ae53-c0b931442ca1-kube-api-access-zc5gj\") pod \"d47140c0-a5ee-4d38-ae53-c0b931442ca1\" (UID: \"d47140c0-a5ee-4d38-ae53-c0b931442ca1\") "
	May 31 18:48:25 addons-133126 kubelet[1559]: I0531 18:48:25.546174    1559 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47140c0-a5ee-4d38-ae53-c0b931442ca1-kube-api-access-zc5gj" (OuterVolumeSpecName: "kube-api-access-zc5gj") pod "d47140c0-a5ee-4d38-ae53-c0b931442ca1" (UID: "d47140c0-a5ee-4d38-ae53-c0b931442ca1"). InnerVolumeSpecName "kube-api-access-zc5gj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 31 18:48:25 addons-133126 kubelet[1559]: I0531 18:48:25.645035    1559 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zc5gj\" (UniqueName: \"kubernetes.io/projected/d47140c0-a5ee-4d38-ae53-c0b931442ca1-kube-api-access-zc5gj\") on node \"addons-133126\" DevicePath \"\""
	May 31 18:48:25 addons-133126 kubelet[1559]: I0531 18:48:25.731181    1559 scope.go:115] "RemoveContainer" containerID="093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7"
	May 31 18:48:25 addons-133126 kubelet[1559]: I0531 18:48:25.749916    1559 scope.go:115] "RemoveContainer" containerID="093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7"
	May 31 18:48:25 addons-133126 kubelet[1559]: E0531 18:48:25.750362    1559 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7\": container with ID starting with 093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7 not found: ID does not exist" containerID="093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7"
	May 31 18:48:25 addons-133126 kubelet[1559]: I0531 18:48:25.750415    1559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7} err="failed to get container status \"093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7\": rpc error: code = NotFound desc = could not find container \"093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7\": container with ID starting with 093285aa8ebc9b3cde33fd87acc043a3b0ad738712445ec9b0b175b252c32eb7 not found: ID does not exist"
	May 31 18:48:26 addons-133126 kubelet[1559]: E0531 18:48:26.342162    1559 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-858bcd4f57-2n5hb.17644ee5110905f7", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-858bcd4f57-2n5hb", UID:"32f89ceb-291f-4619-a2a4-869ebec8127e", APIVersion:"v1", ResourceVersion:"780", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-133126"}, FirstTimestamp:time.Date(2023, time.May, 31, 18, 48, 26, 283689463, time.Local), LastTimestamp:time.Date(2023, time.May, 31, 18, 48, 26, 283689463, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-858bcd4f57-2n5hb.17644ee5110905f7" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 31 18:48:26 addons-133126 kubelet[1559]: I0531 18:48:26.745486    1559 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-c4k7t" podStartSLOduration=1.628237385 podCreationTimestamp="2023-05-31 18:48:24 +0000 UTC" firstStartedPulling="2023-05-31 18:48:25.279543822 +0000 UTC m=+240.609082219" lastFinishedPulling="2023-05-31 18:48:26.396745967 +0000 UTC m=+241.726284362" observedRunningTime="2023-05-31 18:48:26.745240013 +0000 UTC m=+242.074778419" watchObservedRunningTime="2023-05-31 18:48:26.745439528 +0000 UTC m=+242.074977935"
	May 31 18:48:26 addons-133126 kubelet[1559]: I0531 18:48:26.762413    1559 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=3889c9e7-f76c-4ede-9822-be417a4f1eec path="/var/lib/kubelet/pods/3889c9e7-f76c-4ede-9822-be417a4f1eec/volumes"
	May 31 18:48:26 addons-133126 kubelet[1559]: I0531 18:48:26.762776    1559 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=5d608611-8e48-4476-8cc6-5ee8cb98d512 path="/var/lib/kubelet/pods/5d608611-8e48-4476-8cc6-5ee8cb98d512/volumes"
	May 31 18:48:26 addons-133126 kubelet[1559]: I0531 18:48:26.763096    1559 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d47140c0-a5ee-4d38-ae53-c0b931442ca1 path="/var/lib/kubelet/pods/d47140c0-a5ee-4d38-ae53-c0b931442ca1/volumes"
	May 31 18:48:27 addons-133126 kubelet[1559]: I0531 18:48:27.660389    1559 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32f89ceb-291f-4619-a2a4-869ebec8127e-webhook-cert\") pod \"32f89ceb-291f-4619-a2a4-869ebec8127e\" (UID: \"32f89ceb-291f-4619-a2a4-869ebec8127e\") "
	May 31 18:48:27 addons-133126 kubelet[1559]: I0531 18:48:27.660443    1559 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7rs4\" (UniqueName: \"kubernetes.io/projected/32f89ceb-291f-4619-a2a4-869ebec8127e-kube-api-access-h7rs4\") pod \"32f89ceb-291f-4619-a2a4-869ebec8127e\" (UID: \"32f89ceb-291f-4619-a2a4-869ebec8127e\") "
	May 31 18:48:27 addons-133126 kubelet[1559]: I0531 18:48:27.662224    1559 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f89ceb-291f-4619-a2a4-869ebec8127e-kube-api-access-h7rs4" (OuterVolumeSpecName: "kube-api-access-h7rs4") pod "32f89ceb-291f-4619-a2a4-869ebec8127e" (UID: "32f89ceb-291f-4619-a2a4-869ebec8127e"). InnerVolumeSpecName "kube-api-access-h7rs4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 31 18:48:27 addons-133126 kubelet[1559]: I0531 18:48:27.662409    1559 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f89ceb-291f-4619-a2a4-869ebec8127e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "32f89ceb-291f-4619-a2a4-869ebec8127e" (UID: "32f89ceb-291f-4619-a2a4-869ebec8127e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 31 18:48:27 addons-133126 kubelet[1559]: I0531 18:48:27.739749    1559 scope.go:115] "RemoveContainer" containerID="b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb"
	May 31 18:48:27 addons-133126 kubelet[1559]: I0531 18:48:27.756710    1559 scope.go:115] "RemoveContainer" containerID="b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb"
	May 31 18:48:27 addons-133126 kubelet[1559]: E0531 18:48:27.757121    1559 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb\": container with ID starting with b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb not found: ID does not exist" containerID="b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb"
	May 31 18:48:27 addons-133126 kubelet[1559]: I0531 18:48:27.757169    1559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb} err="failed to get container status \"b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb\": rpc error: code = NotFound desc = could not find container \"b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb\": container with ID starting with b66b90f4dd8277fd854ed6abe64108d67d30c74f6614e1a7c7d95da4bea6abbb not found: ID does not exist"
	May 31 18:48:27 addons-133126 kubelet[1559]: I0531 18:48:27.760844    1559 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32f89ceb-291f-4619-a2a4-869ebec8127e-webhook-cert\") on node \"addons-133126\" DevicePath \"\""
	May 31 18:48:27 addons-133126 kubelet[1559]: I0531 18:48:27.760883    1559 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h7rs4\" (UniqueName: \"kubernetes.io/projected/32f89ceb-291f-4619-a2a4-869ebec8127e-kube-api-access-h7rs4\") on node \"addons-133126\" DevicePath \"\""
	May 31 18:48:28 addons-133126 kubelet[1559]: I0531 18:48:28.761791    1559 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=32f89ceb-291f-4619-a2a4-869ebec8127e path="/var/lib/kubelet/pods/32f89ceb-291f-4619-a2a4-869ebec8127e/volumes"
	May 31 18:48:34 addons-133126 kubelet[1559]: W0531 18:48:34.095124    1559 container.go:586] Failed to update stats for container "/crio/crio-e07a970ae6aa90f6dfff1ffd8e7c8f86d6c01e5d7054d4f9876ffb65215c5609": unable to determine device info for dir: /var/lib/containers/storage/overlay/20911d686e4845604b717d5f0e314550e1b5f2a60a875ce5365b8321936baf9a/diff: stat failed on /var/lib/containers/storage/overlay/20911d686e4845604b717d5f0e314550e1b5f2a60a875ce5365b8321936baf9a/diff with error: no such file or directory, continuing to push stats
	
	* 
	* ==> storage-provisioner [3d1f01fb157b182b94e336a291f95ab96ea746e499311fcc2b47a9ac991dc28d] <==
	* I0531 18:45:11.277118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:45:11.283764       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:45:11.283862       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:45:11.289291       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:45:11.289452       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-133126_4f32dbb2-5655-4226-b264-649ebff2e8ab!
	I0531 18:45:11.289456       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"44f692b6-1365-40b9-9479-82c8403e28af", APIVersion:"v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-133126_4f32dbb2-5655-4226-b264-649ebff2e8ab became leader
	I0531 18:45:11.390429       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-133126_4f32dbb2-5655-4226-b264-649ebff2e8ab!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-133126 -n addons-133126
helpers_test.go:261: (dbg) Run:  kubectl --context addons-133126 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.38s)
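
For reference, the post-mortem collection above can be replayed by hand against the same profile. A minimal sketch, assuming the addons-133126 profile from this run is still up; all three commands are taken from the helpers_test.go steps logged in this section:

	# Check apiserver health, list non-Running pods, and pull the last 25 log lines,
	# exactly as the post-mortem helpers do for this profile.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-133126 -n addons-133126
	kubectl --context addons-133126 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 -p addons-133126 logs -n 25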

TestFunctional/parallel/ImageCommands/ImageBuild (8.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 ssh pgrep buildkitd: exit status 1 (292.192448ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image build -t localhost/my-image:functional-744804 testdata/build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p functional-744804 image build -t localhost/my-image:functional-744804 testdata/build --alsologtostderr: (5.875850241s)
functional_test.go:318: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744804 image build -t localhost/my-image:functional-744804 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8738e1a38d2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-744804
--> ea2d20a6383
Successfully tagged localhost/my-image:functional-744804
ea2d20a6383a1bc962f14e1436b97b678676dab76b1c62ced406f9be4f76c565
functional_test.go:321: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744804 image build -t localhost/my-image:functional-744804 testdata/build --alsologtostderr:
I0531 18:52:43.189298   50444 out.go:296] Setting OutFile to fd 1 ...
I0531 18:52:43.189444   50444 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:43.189450   50444 out.go:309] Setting ErrFile to fd 2...
I0531 18:52:43.189456   50444 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:43.189620   50444 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
I0531 18:52:43.190256   50444 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:43.190897   50444 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:43.191388   50444 cli_runner.go:164] Run: docker container inspect functional-744804 --format={{.State.Status}}
I0531 18:52:43.208682   50444 ssh_runner.go:195] Run: systemctl --version
I0531 18:52:43.208753   50444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744804
I0531 18:52:43.224962   50444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/functional-744804/id_rsa Username:docker}
I0531 18:52:43.346872   50444 build_images.go:151] Building image from path: /tmp/build.1653881543.tar
I0531 18:52:43.346936   50444 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0531 18:52:43.356592   50444 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1653881543.tar
I0531 18:52:43.360691   50444 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1653881543.tar: stat -c "%s %y" /var/lib/minikube/build/build.1653881543.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1653881543.tar': No such file or directory
I0531 18:52:43.360733   50444 ssh_runner.go:362] scp /tmp/build.1653881543.tar --> /var/lib/minikube/build/build.1653881543.tar (3072 bytes)
I0531 18:52:43.460516   50444 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1653881543
I0531 18:52:43.471469   50444 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1653881543 -xf /var/lib/minikube/build/build.1653881543.tar
I0531 18:52:43.544547   50444 crio.go:297] Building image: /var/lib/minikube/build/build.1653881543
I0531 18:52:43.544634   50444 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-744804 /var/lib/minikube/build/build.1653881543 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0531 18:52:48.997005   50444 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-744804 /var/lib/minikube/build/build.1653881543 --cgroup-manager=cgroupfs: (5.452343522s)
I0531 18:52:48.997066   50444 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1653881543
I0531 18:52:49.005567   50444 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1653881543.tar
I0531 18:52:49.013561   50444 build_images.go:207] Built localhost/my-image:functional-744804 from /tmp/build.1653881543.tar
I0531 18:52:49.013593   50444 build_images.go:123] succeeded building to: functional-744804
I0531 18:52:49.013599   50444 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls
functional_test.go:446: (dbg) Done: out/minikube-linux-amd64 -p functional-744804 image ls: (2.264211221s)
functional_test.go:441: expected "localhost/my-image:functional-744804" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (8.43s)
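
Note the sequence above: pgrep buildkitd fails (crio runtime, so the test falls back to podman), the podman build completes and tags localhost/my-image:functional-744804, yet the follow-up image ls does not show the tag. A minimal sketch for re-checking this by hand, assuming the functional-744804 profile is still running; the commands are copied from the steps logged above:

	# 1. Confirm buildkitd is absent (expected on crio; this triggers the podman path).
	out/minikube-linux-amd64 -p functional-744804 ssh pgrep buildkitd
	# 2. Build and tag the test image from testdata/build.
	out/minikube-linux-amd64 -p functional-744804 image build -t localhost/my-image:functional-744804 testdata/build --alsologtostderr
	# 3. The failing assertion: the tag should appear in the image list.
	out/minikube-linux-amd64 -p functional-744804 image ls | grep localhost/my-image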

TestIngressAddonLegacy/serial/ValidateIngressAddons (183.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-466444 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-466444 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (18.319864051s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-466444 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-466444 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cf6c230f-70b1-4485-9da6-d9fe84592eff] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cf6c230f-70b1-4485-9da6-d9fe84592eff] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.006072695s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466444 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0531 18:55:50.615050   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:56:18.299779   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-466444 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.66393999s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-466444 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466444 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.015186953s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
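
Both failing probes can be replayed by hand against the node IP reported above (192.168.49.2). A minimal sketch, assuming the ingress-addon-legacy-466444 profile is still running; the commands are taken from the failing steps logged in this section:

	# The ingress probe that timed out (ssh exit status 28 is curl's operation-timeout code).
	out/minikube-linux-amd64 -p ingress-addon-legacy-466444 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# The ingress-dns probe that timed out ("no servers could be reached").
	nslookup hello-john.test 192.168.49.2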
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466444 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466444 addons disable ingress --alsologtostderr -v=1
E0531 18:57:17.964463   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:17.969708   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:17.979951   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:18.000193   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:18.040541   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:18.120890   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:18.281292   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:18.601885   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:19.242822   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:20.523542   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-466444 addons disable ingress --alsologtostderr -v=1: (7.201715507s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-466444
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-466444:

-- stdout --
	[
	    {
	        "Id": "5d78a8ba775bba118c5ff1e0afc5ae2e0a3c3dd9fcaf30c94e45d99a82495dc3",
	        "Created": "2023-05-31T18:53:20.464974189Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52731,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T18:53:20.754349647Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f246fffc476e503eec088cb85bddb7b217288054dd7e1375d4f95eca27f4bce3",
	        "ResolvConfPath": "/var/lib/docker/containers/5d78a8ba775bba118c5ff1e0afc5ae2e0a3c3dd9fcaf30c94e45d99a82495dc3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d78a8ba775bba118c5ff1e0afc5ae2e0a3c3dd9fcaf30c94e45d99a82495dc3/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d78a8ba775bba118c5ff1e0afc5ae2e0a3c3dd9fcaf30c94e45d99a82495dc3/hosts",
	        "LogPath": "/var/lib/docker/containers/5d78a8ba775bba118c5ff1e0afc5ae2e0a3c3dd9fcaf30c94e45d99a82495dc3/5d78a8ba775bba118c5ff1e0afc5ae2e0a3c3dd9fcaf30c94e45d99a82495dc3-json.log",
	        "Name": "/ingress-addon-legacy-466444",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-466444:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-466444",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d83dfe56d5bebba065e7ddceceef9aaaf978700d595f1017ab76090c688abdbc-init/diff:/var/lib/docker/overlay2/ff5bbba96769eca5d0c1a4ffdb04787b9f84aae4dcd4bc9929a365a3d058b20f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d83dfe56d5bebba065e7ddceceef9aaaf978700d595f1017ab76090c688abdbc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d83dfe56d5bebba065e7ddceceef9aaaf978700d595f1017ab76090c688abdbc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d83dfe56d5bebba065e7ddceceef9aaaf978700d595f1017ab76090c688abdbc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-466444",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-466444/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-466444",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-466444",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-466444",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5b1c9a1f9540d24bf23c00bf54308542c21c77e98e9635c55d2ebafea22467c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d5b1c9a1f954",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-466444": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5d78a8ba775b",
	                        "ingress-addon-legacy-466444"
	                    ],
	                    "NetworkID": "635526c1abe08594cff6f2804c2534e874315aca285b057c41b1daac1e0f6749",
	                    "EndpointID": "6af47044469a200e74d159705e2f916476bd0268c1edb72cddde93f17e230fab",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-466444 -n ingress-addon-legacy-466444
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466444 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-466444 logs -n 25: (1.03360469s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-744804 image load                                                 | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-744804 image ls                                                   | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	| image          | functional-744804 image save --daemon                                        | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-744804                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| service        | functional-744804 service                                                    | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | hello-node-connect --url                                                     |                             |         |         |                     |                     |
	| addons         | functional-744804 addons list                                                | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	| addons         | functional-744804 addons list                                                | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | -o json                                                                      |                             |         |         |                     |                     |
	| ssh            | functional-744804 ssh sudo cat                                               | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | /etc/test/nested/copy/14232/hosts                                            |                             |         |         |                     |                     |
	| update-context | functional-744804                                                            | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-744804                                                            | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-744804                                                            | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-744804                                                            | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | image ls --format short                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-744804                                                            | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh            | functional-744804 ssh pgrep                                                  | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC |                     |
	|                | buildkitd                                                                    |                             |         |         |                     |                     |
	| image          | functional-744804 image build -t                                             | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | localhost/my-image:functional-744804                                         |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image          | functional-744804                                                            | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | image ls --format json                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-744804                                                            | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	|                | image ls --format table                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-744804 image ls                                                   | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:52 UTC | 31 May 23 18:52 UTC |
	| delete         | -p functional-744804                                                         | functional-744804           | jenkins | v1.30.1 | 31 May 23 18:53 UTC | 31 May 23 18:53 UTC |
	| start          | -p ingress-addon-legacy-466444                                               | ingress-addon-legacy-466444 | jenkins | v1.30.1 | 31 May 23 18:53 UTC | 31 May 23 18:54 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                     |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-466444                                                  | ingress-addon-legacy-466444 | jenkins | v1.30.1 | 31 May 23 18:54 UTC | 31 May 23 18:54 UTC |
	|                | addons enable ingress                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-466444                                                  | ingress-addon-legacy-466444 | jenkins | v1.30.1 | 31 May 23 18:54 UTC | 31 May 23 18:54 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-466444                                                  | ingress-addon-legacy-466444 | jenkins | v1.30.1 | 31 May 23 18:54 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-466444 ip                                               | ingress-addon-legacy-466444 | jenkins | v1.30.1 | 31 May 23 18:56 UTC | 31 May 23 18:56 UTC |
	| addons         | ingress-addon-legacy-466444                                                  | ingress-addon-legacy-466444 | jenkins | v1.30.1 | 31 May 23 18:57 UTC | 31 May 23 18:57 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-466444                                                  | ingress-addon-legacy-466444 | jenkins | v1.30.1 | 31 May 23 18:57 UTC | 31 May 23 18:57 UTC |
	|                | addons disable ingress                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 18:53:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:53:06.414472   52116 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:53:06.414639   52116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:53:06.414647   52116 out.go:309] Setting ErrFile to fd 2...
	I0531 18:53:06.414651   52116 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:53:06.414757   52116 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 18:53:06.415329   52116 out.go:303] Setting JSON to false
	I0531 18:53:06.416613   52116 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2136,"bootTime":1685557051,"procs":513,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:53:06.416708   52116 start.go:137] virtualization: kvm guest
	I0531 18:53:06.419520   52116 out.go:177] * [ingress-addon-legacy-466444] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:53:06.421157   52116 notify.go:220] Checking for updates...
	I0531 18:53:06.421183   52116 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 18:53:06.423093   52116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:53:06.424954   52116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 18:53:06.426705   52116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 18:53:06.428409   52116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:53:06.429968   52116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:53:06.431696   52116 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:53:06.452608   52116 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:53:06.452725   52116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:53:06.496903   52116 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-05-31 18:53:06.488664621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:53:06.496999   52116 docker.go:294] overlay module found
	I0531 18:53:06.499318   52116 out.go:177] * Using the docker driver based on user configuration
	I0531 18:53:06.500897   52116 start.go:297] selected driver: docker
	I0531 18:53:06.500909   52116 start.go:875] validating driver "docker" against <nil>
	I0531 18:53:06.500918   52116 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:53:06.501632   52116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:53:06.548520   52116 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-05-31 18:53:06.540116049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:53:06.548676   52116 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 18:53:06.548929   52116 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:53:06.551358   52116 out.go:177] * Using Docker driver with root privileges
	I0531 18:53:06.553092   52116 cni.go:84] Creating CNI manager for ""
	I0531 18:53:06.553113   52116 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:53:06.553129   52116 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 18:53:06.553143   52116 start_flags.go:319] config:
	{Name:ingress-addon-legacy-466444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-466444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:53:06.555071   52116 out.go:177] * Starting control plane node ingress-addon-legacy-466444 in cluster ingress-addon-legacy-466444
	I0531 18:53:06.556777   52116 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 18:53:06.558365   52116 out.go:177] * Pulling base image ...
	I0531 18:53:06.559822   52116 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0531 18:53:06.559924   52116 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 18:53:06.577446   52116 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 18:53:06.577471   52116 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 18:53:06.610451   52116 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0531 18:53:06.610480   52116 cache.go:57] Caching tarball of preloaded images
	I0531 18:53:06.610623   52116 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0531 18:53:06.612906   52116 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0531 18:53:06.614794   52116 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0531 18:53:06.645689   52116 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0531 18:53:12.334153   52116 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0531 18:53:12.334273   52116 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0531 18:53:13.287844   52116 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0531 18:53:13.288214   52116 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/config.json ...
	I0531 18:53:13.288254   52116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/config.json: {Name:mk1c7b143bb1b5e0d6e0d438acf1461f6fc92c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:13.288462   52116 cache.go:195] Successfully downloaded all kic artifacts
	I0531 18:53:13.288501   52116 start.go:364] acquiring machines lock for ingress-addon-legacy-466444: {Name:mk434af0803e889ff3309ff6b98e845b9d1670e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:53:13.288563   52116 start.go:368] acquired machines lock for "ingress-addon-legacy-466444" in 47.322µs
	I0531 18:53:13.288585   52116 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-466444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-466444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:53:13.288671   52116 start.go:125] createHost starting for "" (driver="docker")
	I0531 18:53:13.291523   52116 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0531 18:53:13.291790   52116 start.go:159] libmachine.API.Create for "ingress-addon-legacy-466444" (driver="docker")
	I0531 18:53:13.291825   52116 client.go:168] LocalClient.Create starting
	I0531 18:53:13.291990   52116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem
	I0531 18:53:13.292040   52116 main.go:141] libmachine: Decoding PEM data...
	I0531 18:53:13.292067   52116 main.go:141] libmachine: Parsing certificate...
	I0531 18:53:13.292144   52116 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem
	I0531 18:53:13.292173   52116 main.go:141] libmachine: Decoding PEM data...
	I0531 18:53:13.292190   52116 main.go:141] libmachine: Parsing certificate...
	I0531 18:53:13.292561   52116 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-466444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 18:53:13.308025   52116 cli_runner.go:211] docker network inspect ingress-addon-legacy-466444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 18:53:13.308091   52116 network_create.go:281] running [docker network inspect ingress-addon-legacy-466444] to gather additional debugging logs...
	I0531 18:53:13.308113   52116 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-466444
	W0531 18:53:13.323501   52116 cli_runner.go:211] docker network inspect ingress-addon-legacy-466444 returned with exit code 1
	I0531 18:53:13.323543   52116 network_create.go:284] error running [docker network inspect ingress-addon-legacy-466444]: docker network inspect ingress-addon-legacy-466444: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-466444 not found
	I0531 18:53:13.323565   52116 network_create.go:286] output of [docker network inspect ingress-addon-legacy-466444]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-466444 not found
	
	** /stderr **
	I0531 18:53:13.323627   52116 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:53:13.340173   52116 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000dbed20}
	I0531 18:53:13.340207   52116 network_create.go:123] attempt to create docker network ingress-addon-legacy-466444 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0531 18:53:13.340248   52116 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-466444 ingress-addon-legacy-466444
	I0531 18:53:13.392050   52116 network_create.go:107] docker network ingress-addon-legacy-466444 192.168.49.0/24 created
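
The "using free private subnet" line above derives the gateway (.1), first client IP (.2, which becomes the node's static IP), last client IP (.254) and broadcast (.255) from the chosen CIDR. A small Go sketch of that arithmetic, valid only for the /24 case shown here:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// The subnet picked in the log above.
    	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
    	if err != nil {
    		panic(err)
    	}
    	base := ipnet.IP.To4()

    	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)   // .1
    	clientMin := net.IPv4(base[0], base[1], base[2], base[3]+2) // .2, first container IP
    	clientMax := net.IPv4(base[0], base[1], base[2], 254)       // .254
    	broadcast := net.IPv4(base[0], base[1], base[2], 255)       // .255

    	fmt.Println("gateway:", gateway, "clients:", clientMin, "-", clientMax, "broadcast:", broadcast)
    }
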
	I0531 18:53:13.392084   52116 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-466444" container
	I0531 18:53:13.392151   52116 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 18:53:13.407525   52116 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-466444 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-466444 --label created_by.minikube.sigs.k8s.io=true
	I0531 18:53:13.424450   52116 oci.go:103] Successfully created a docker volume ingress-addon-legacy-466444
	I0531 18:53:13.424528   52116 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-466444-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-466444 --entrypoint /usr/bin/test -v ingress-addon-legacy-466444:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 18:53:15.156786   52116 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-466444-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-466444 --entrypoint /usr/bin/test -v ingress-addon-legacy-466444:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib: (1.732208839s)
	I0531 18:53:15.156815   52116 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-466444
	I0531 18:53:15.156840   52116 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0531 18:53:15.156862   52116 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 18:53:15.156920   52116 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-466444:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 18:53:20.403806   52116 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-466444:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.246837335s)
	I0531 18:53:20.403837   52116 kic.go:199] duration metric: took 5.246972 seconds to extract preloaded images to volume
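
The two docker run commands above use throwaway containers to seed the named volume: one probes it with /usr/bin/test, the next unpacks the preload tarball into it with tar. A sketch of that extraction step via os/exec, assuming docker is on PATH; the host tarball path is hypothetical, the volume and image names are the ones from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	tarball := "/path/to/preloaded-images.tar.lz4" // hypothetical host path
    	volume := "ingress-addon-legacy-466444"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582"

    	// Mount the tarball read-only, mount the named volume at /extractDir,
    	// and run tar inside the base image to unpack into the volume.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "extract failed:", err)
    		os.Exit(1)
    	}
    }
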
	W0531 18:53:20.403966   52116 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 18:53:20.404043   52116 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 18:53:20.450564   52116 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-466444 --name ingress-addon-legacy-466444 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-466444 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-466444 --network ingress-addon-legacy-466444 --ip 192.168.49.2 --volume ingress-addon-legacy-466444:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 18:53:20.763732   52116 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-466444 --format={{.State.Running}}
	I0531 18:53:20.781072   52116 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-466444 --format={{.State.Status}}
	I0531 18:53:20.798538   52116 cli_runner.go:164] Run: docker exec ingress-addon-legacy-466444 stat /var/lib/dpkg/alternatives/iptables
	I0531 18:53:20.869617   52116 oci.go:144] the created container "ingress-addon-legacy-466444" has a running status.
	I0531 18:53:20.869643   52116 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa...
	I0531 18:53:21.099168   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0531 18:53:21.099215   52116 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 18:53:21.121123   52116 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-466444 --format={{.State.Status}}
	I0531 18:53:21.140606   52116 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 18:53:21.140631   52116 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-466444 chown docker:docker /home/docker/.ssh/authorized_keys]
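
The kic lines above generate an RSA keypair on the host, copy the public half into the container as /home/docker/.ssh/authorized_keys, and chown it. A minimal sketch of producing an authorized_keys entry in Go, assuming the golang.org/x/crypto/ssh module is available; the output filename is illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Generate a fresh RSA keypair for the node (the log writes id_rsa/id_rsa.pub).
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}

    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}

    	// authorized_keys format, e.g. "ssh-rsa AAAA...\n". Persisting the
    	// private key (PEM-encoded id_rsa) is elided for brevity.
    	if err := os.WriteFile("authorized_keys", ssh.MarshalAuthorizedKey(pub), 0600); err != nil {
    		panic(err)
    	}
    }
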
	I0531 18:53:21.253726   52116 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-466444 --format={{.State.Status}}
	I0531 18:53:21.271779   52116 machine.go:88] provisioning docker machine ...
	I0531 18:53:21.271818   52116 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-466444"
	I0531 18:53:21.271884   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:21.300496   52116 main.go:141] libmachine: Using SSH client type: native
	I0531 18:53:21.301058   52116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0531 18:53:21.301072   52116 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-466444 && echo "ingress-addon-legacy-466444" | sudo tee /etc/hostname
	I0531 18:53:21.464428   52116 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-466444
	
	I0531 18:53:21.464505   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:21.483594   52116 main.go:141] libmachine: Using SSH client type: native
	I0531 18:53:21.484171   52116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0531 18:53:21.484193   52116 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-466444' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-466444/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-466444' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:53:21.600336   52116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:53:21.600375   52116 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-7270/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-7270/.minikube}
	I0531 18:53:21.600408   52116 ubuntu.go:177] setting up certificates
	I0531 18:53:21.600422   52116 provision.go:83] configureAuth start
	I0531 18:53:21.600515   52116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-466444
	I0531 18:53:21.617161   52116 provision.go:138] copyHostCerts
	I0531 18:53:21.617199   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem
	I0531 18:53:21.617226   52116 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem, removing ...
	I0531 18:53:21.617234   52116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem
	I0531 18:53:21.617299   52116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem (1078 bytes)
	I0531 18:53:21.617368   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem
	I0531 18:53:21.617385   52116 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem, removing ...
	I0531 18:53:21.617394   52116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem
	I0531 18:53:21.617415   52116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem (1123 bytes)
	I0531 18:53:21.617465   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem
	I0531 18:53:21.617480   52116 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem, removing ...
	I0531 18:53:21.617486   52116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem
	I0531 18:53:21.617505   52116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem (1675 bytes)
	I0531 18:53:21.617547   52116 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-466444 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-466444]
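
The "generating server cert" line above issues a CA-signed server certificate whose SANs cover the node IP, loopback, and hostname aliases. A sketch of the same shape with crypto/x509, using an in-memory CA for self-containment (minikube instead loads its CA from ca.pem/ca-key.pem); error handling is abbreviated:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical in-memory CA (errors ignored for brevity).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-466444"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-466444"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
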
	I0531 18:53:21.743682   52116 provision.go:172] copyRemoteCerts
	I0531 18:53:21.743750   52116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:53:21.743807   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:21.759919   52116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa Username:docker}
	I0531 18:53:21.844577   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:53:21.844638   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 18:53:21.865426   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:53:21.865487   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0531 18:53:21.886169   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:53:21.886222   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:53:21.907007   52116 provision.go:86] duration metric: configureAuth took 306.565145ms
	I0531 18:53:21.907043   52116 ubuntu.go:193] setting minikube options for container-runtime
	I0531 18:53:21.907238   52116 config.go:182] Loaded profile config "ingress-addon-legacy-466444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0531 18:53:21.907330   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:21.924160   52116 main.go:141] libmachine: Using SSH client type: native
	I0531 18:53:21.924602   52116 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0531 18:53:21.924623   52116 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:53:22.142305   52116 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:53:22.142330   52116 machine.go:91] provisioned docker machine in 870.528181ms
	I0531 18:53:22.142338   52116 client.go:171] LocalClient.Create took 8.850507767s
	I0531 18:53:22.142354   52116 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-466444" took 8.850564497s
	I0531 18:53:22.142361   52116 start.go:300] post-start starting for "ingress-addon-legacy-466444" (driver="docker")
	I0531 18:53:22.142366   52116 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:53:22.142419   52116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:53:22.142456   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:22.158819   52116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa Username:docker}
	I0531 18:53:22.244792   52116 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:53:22.247814   52116 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 18:53:22.247845   52116 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 18:53:22.247853   52116 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 18:53:22.247859   52116 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 18:53:22.247868   52116 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/addons for local assets ...
	I0531 18:53:22.247918   52116 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/files for local assets ...
	I0531 18:53:22.247983   52116 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> 142322.pem in /etc/ssl/certs
	I0531 18:53:22.247993   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> /etc/ssl/certs/142322.pem
	I0531 18:53:22.248080   52116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:53:22.255576   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /etc/ssl/certs/142322.pem (1708 bytes)
	I0531 18:53:22.275895   52116 start.go:303] post-start completed in 133.521231ms
	I0531 18:53:22.276246   52116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-466444
	I0531 18:53:22.291690   52116 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/config.json ...
	I0531 18:53:22.291923   52116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:53:22.291960   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:22.307115   52116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa Username:docker}
	I0531 18:53:22.388760   52116 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 18:53:22.392637   52116 start.go:128] duration metric: createHost completed in 9.103953841s
	I0531 18:53:22.392659   52116 start.go:83] releasing machines lock for "ingress-addon-legacy-466444", held for 9.104085359s
	I0531 18:53:22.392712   52116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-466444
	I0531 18:53:22.408362   52116 ssh_runner.go:195] Run: cat /version.json
	I0531 18:53:22.408413   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:22.408449   52116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:53:22.408502   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:22.424157   52116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa Username:docker}
	I0531 18:53:22.424646   52116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa Username:docker}
	I0531 18:53:22.588526   52116 ssh_runner.go:195] Run: systemctl --version
	I0531 18:53:22.592876   52116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:53:22.728410   52116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 18:53:22.732688   52116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:53:22.751186   52116 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 18:53:22.751269   52116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:53:22.778692   52116 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
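
The find/mv pipeline above disables competing CNI configs by renaming them with a ".mk_disabled" suffix rather than deleting them. A Go sketch of the same rename pass; the glob patterns mirror the bridge/podman filters in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Rename bridge/podman CNI configs so cri-o ignores them.
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pattern)
    		if err != nil {
    			panic(err)
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }
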
	I0531 18:53:22.778721   52116 start.go:481] detecting cgroup driver to use...
	I0531 18:53:22.778758   52116 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 18:53:22.778810   52116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:53:22.792100   52116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:53:22.802061   52116 docker.go:193] disabling cri-docker service (if available) ...
	I0531 18:53:22.802126   52116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:53:22.813344   52116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:53:22.825440   52116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:53:22.897007   52116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:53:22.970233   52116 docker.go:209] disabling docker service ...
	I0531 18:53:22.970285   52116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:53:22.987194   52116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:53:22.997466   52116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:53:23.069453   52116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:53:23.149267   52116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:53:23.159766   52116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:53:23.175272   52116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0531 18:53:23.175329   52116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:53:23.184077   52116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:53:23.184139   52116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:53:23.192651   52116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:53:23.200965   52116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:53:23.209677   52116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:53:23.218204   52116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:53:23.225773   52116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:53:23.233457   52116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:53:23.304130   52116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:53:23.408159   52116 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:53:23.408221   52116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
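
After restarting crio, the log waits up to 60s for the runtime socket to appear before probing crictl. A sketch of that readiness poll in Go; the half-second interval is an assumption, not taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
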
	I0531 18:53:23.412064   52116 start.go:549] Will wait 60s for crictl version
	I0531 18:53:23.412110   52116 ssh_runner.go:195] Run: which crictl
	I0531 18:53:23.415057   52116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:53:23.446021   52116 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 18:53:23.446095   52116 ssh_runner.go:195] Run: crio --version
	I0531 18:53:23.478354   52116 ssh_runner.go:195] Run: crio --version
	I0531 18:53:23.514503   52116 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.5 ...
	I0531 18:53:23.516417   52116 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-466444 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 18:53:23.532140   52116 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0531 18:53:23.535607   52116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:53:23.545483   52116 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0531 18:53:23.545545   52116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:53:23.588778   52116 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0531 18:53:23.588833   52116 ssh_runner.go:195] Run: which lz4
	I0531 18:53:23.591956   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0531 18:53:23.592039   52116 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0531 18:53:23.594950   52116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0531 18:53:23.594970   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0531 18:53:24.540512   52116 crio.go:444] Took 0.948502 seconds to copy over tarball
	I0531 18:53:24.540566   52116 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0531 18:53:26.829714   52116 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.289119394s)
	I0531 18:53:26.829750   52116 crio.go:451] Took 2.289211 seconds to extract the tarball
	I0531 18:53:26.829761   52116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0531 18:53:26.897643   52116 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:53:26.929441   52116 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0531 18:53:26.929462   52116 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0531 18:53:26.929510   52116 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:53:26.929538   52116 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0531 18:53:26.929563   52116 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0531 18:53:26.929591   52116 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0531 18:53:26.929621   52116 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:53:26.929543   52116 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0531 18:53:26.929564   52116 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0531 18:53:26.929770   52116 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0531 18:53:26.930721   52116 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0531 18:53:26.930726   52116 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0531 18:53:26.930890   52116 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:53:26.930722   52116 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0531 18:53:26.930731   52116 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0531 18:53:26.930726   52116 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0531 18:53:26.930736   52116 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0531 18:53:26.930764   52116 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:53:27.085824   52116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0531 18:53:27.117827   52116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0531 18:53:27.120606   52116 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0531 18:53:27.120642   52116 cri.go:217] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0531 18:53:27.120678   52116 ssh_runner.go:195] Run: which crictl
	I0531 18:53:27.138061   52116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0531 18:53:27.141796   52116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0531 18:53:27.148254   52116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:53:27.154018   52116 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0531 18:53:27.154058   52116 cri.go:217] Removing image: registry.k8s.io/coredns:1.6.7
	I0531 18:53:27.154094   52116 ssh_runner.go:195] Run: which crictl
	I0531 18:53:27.154101   52116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0531 18:53:27.179779   52116 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0531 18:53:27.179823   52116 cri.go:217] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0531 18:53:27.179872   52116 ssh_runner.go:195] Run: which crictl
	I0531 18:53:27.182785   52116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0531 18:53:27.185822   52116 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0531 18:53:27.185856   52116 cri.go:217] Removing image: registry.k8s.io/pause:3.2
	I0531 18:53:27.185885   52116 ssh_runner.go:195] Run: which crictl
	I0531 18:53:27.204363   52116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0531 18:53:27.242148   52116 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:53:27.253954   52116 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0531 18:53:27.254048   52116 cri.go:217] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:53:27.254067   52116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0531 18:53:27.254110   52116 ssh_runner.go:195] Run: which crictl
	I0531 18:53:27.254201   52116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0531 18:53:27.254268   52116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0531 18:53:27.348905   52116 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0531 18:53:27.348974   52116 cri.go:217] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0531 18:53:27.349003   52116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0531 18:53:27.349012   52116 ssh_runner.go:195] Run: which crictl
	I0531 18:53:27.352088   52116 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0531 18:53:27.352158   52116 cri.go:217] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0531 18:53:27.352212   52116 ssh_runner.go:195] Run: which crictl
	I0531 18:53:27.459909   52116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0531 18:53:27.459986   52116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0531 18:53:27.460092   52116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0531 18:53:27.460124   52116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0531 18:53:27.460156   52116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0531 18:53:27.460232   52116 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0531 18:53:27.496194   52116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0531 18:53:27.496232   52116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0531 18:53:27.497471   52116 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0531 18:53:27.497511   52116 cache_images.go:92] LoadImages completed in 568.040155ms
	W0531 18:53:27.497575   52116 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0531 18:53:27.497628   52116 ssh_runner.go:195] Run: crio config
	I0531 18:53:27.537329   52116 cni.go:84] Creating CNI manager for ""
	I0531 18:53:27.537346   52116 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:53:27.537356   52116 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 18:53:27.537373   52116 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-466444 NodeName:ingress-addon-legacy-466444 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0531 18:53:27.537496   52116 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-466444"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:53:27.537591   52116 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-466444 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-466444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 18:53:27.537644   52116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0531 18:53:27.545310   52116 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:53:27.545374   52116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 18:53:27.552736   52116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0531 18:53:27.567754   52116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0531 18:53:27.582970   52116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0531 18:53:27.597869   52116 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0531 18:53:27.601251   52116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
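
The bash one-liner above makes the /etc/hosts entry idempotent: drop any existing line for the hostname, append a fresh "ip<TAB>hostname" mapping, then copy the result into place. The same idea in Go; upsertHost is a hypothetical helper, and it targets a local test file because writing the real /etc/hosts requires root:

    package main

    import (
    	"os"
    	"strings"
    )

    // upsertHost rewrites a hosts file, dropping any line that already maps
    // the hostname and appending a fresh "ip\thostname" entry.
    func upsertHost(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any existing mapping for this hostname.
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHost("hosts.test", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
    		panic(err)
    	}
    }
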
	I0531 18:53:27.610480   52116 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444 for IP: 192.168.49.2
	I0531 18:53:27.610513   52116 certs.go:190] acquiring lock for shared ca certs: {Name:mkbc42e9eaddef0752bd9f3cb948d1ed478bdf0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:27.610641   52116 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key
	I0531 18:53:27.610703   52116 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key
	I0531 18:53:27.610742   52116 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.key
	I0531 18:53:27.610763   52116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt with IP's: []
	I0531 18:53:27.873063   52116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt ...
	I0531 18:53:27.873099   52116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: {Name:mk3de62f134e97500cb3db01a4c66b34ae6320e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:27.873270   52116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.key ...
	I0531 18:53:27.873281   52116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.key: {Name:mkce9b95bca37de3e5c1eebe1285f81284066c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:27.873349   52116 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.key.dd3b5fb2
	I0531 18:53:27.873364   52116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 18:53:27.930534   52116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.crt.dd3b5fb2 ...
	I0531 18:53:27.930576   52116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.crt.dd3b5fb2: {Name:mk655b372a92ee26878b7e5d98ea576f8cc7bd8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:27.930768   52116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.key.dd3b5fb2 ...
	I0531 18:53:27.930780   52116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.key.dd3b5fb2: {Name:mk9c268cd23889952d9437cf5deb40b6b481c0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:27.930856   52116 certs.go:337] copying /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.crt
	I0531 18:53:27.930929   52116 certs.go:341] copying /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.key
	I0531 18:53:27.930985   52116 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.key
	I0531 18:53:27.931000   52116 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.crt with IP's: []
	I0531 18:53:28.111157   52116 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.crt ...
	I0531 18:53:28.111191   52116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.crt: {Name:mkf94ca9833e8ffa591e57527018ff7815ba939a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:28.111353   52116 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.key ...
	I0531 18:53:28.111368   52116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.key: {Name:mk4371a8941db54917677557c23367866c893365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:28.111438   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:53:28.111455   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:53:28.111466   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:53:28.111481   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:53:28.111492   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:53:28.111504   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:53:28.111516   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:53:28.111526   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:53:28.111576   52116 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem (1338 bytes)
	W0531 18:53:28.111611   52116 certs.go:433] ignoring /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232_empty.pem, impossibly tiny 0 bytes
	I0531 18:53:28.111625   52116 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem (1679 bytes)
	I0531 18:53:28.111650   52116 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem (1078 bytes)
	I0531 18:53:28.111676   52116 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:53:28.111705   52116 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem (1675 bytes)
	I0531 18:53:28.111744   52116 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem (1708 bytes)
	I0531 18:53:28.111770   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:53:28.111788   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem -> /usr/share/ca-certificates/14232.pem
	I0531 18:53:28.111806   52116 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> /usr/share/ca-certificates/142322.pem
	I0531 18:53:28.112406   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 18:53:28.134269   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 18:53:28.154740   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:53:28.175220   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:53:28.195560   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:53:28.215059   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:53:28.236259   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:53:28.256984   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:53:28.277025   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:53:28.297196   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem --> /usr/share/ca-certificates/14232.pem (1338 bytes)
	I0531 18:53:28.317271   52116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /usr/share/ca-certificates/142322.pem (1708 bytes)
	I0531 18:53:28.338275   52116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:53:28.353184   52116 ssh_runner.go:195] Run: openssl version
	I0531 18:53:28.358052   52116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:53:28.365817   52116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:53:28.368843   52116 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:53:28.368903   52116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:53:28.374867   52116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:53:28.382606   52116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14232.pem && ln -fs /usr/share/ca-certificates/14232.pem /etc/ssl/certs/14232.pem"
	I0531 18:53:28.390503   52116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14232.pem
	I0531 18:53:28.393510   52116 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 31 18:49 /usr/share/ca-certificates/14232.pem
	I0531 18:53:28.393560   52116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14232.pem
	I0531 18:53:28.399481   52116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14232.pem /etc/ssl/certs/51391683.0"
	I0531 18:53:28.407191   52116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142322.pem && ln -fs /usr/share/ca-certificates/142322.pem /etc/ssl/certs/142322.pem"
	I0531 18:53:28.415154   52116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142322.pem
	I0531 18:53:28.418273   52116 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 31 18:49 /usr/share/ca-certificates/142322.pem
	I0531 18:53:28.418316   52116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142322.pem
	I0531 18:53:28.424448   52116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142322.pem /etc/ssl/certs/3ec20f2e.0"
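
Note on the openssl/ln sequence above: each CA is installed into the node's trust store by computing its OpenSSL subject hash (openssl x509 -hash -noout) and symlinking /etc/ssl/certs/<hash>.0 to the PEM, which is how OpenSSL-based clients locate trusted roots. A minimal Go sketch of the same pattern, with an illustrative helper name (this is not minikube's actual code):

// installCACert is a minimal sketch of the pattern in the log above (not
// minikube's actual helper): hash a CA with `openssl x509 -hash`, then
// symlink /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL-based clients can
// find it by subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// Equivalent of: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// Equivalent of: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}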
	I0531 18:53:28.432616   52116 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 18:53:28.435405   52116 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 18:53:28.435446   52116 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-466444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-466444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:53:28.435515   52116 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:53:28.435550   52116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:53:28.466921   52116 cri.go:88] found id: ""
	I0531 18:53:28.466971   52116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:53:28.474877   52116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:53:28.482643   52116 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0531 18:53:28.482697   52116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:53:28.490048   52116 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:53:28.490121   52116 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 18:53:28.531089   52116 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0531 18:53:28.531152   52116 kubeadm.go:322] [preflight] Running pre-flight checks
	I0531 18:53:28.566848   52116 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0531 18:53:28.566945   52116 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1035-gcp
	I0531 18:53:28.566990   52116 kubeadm.go:322] OS: Linux
	I0531 18:53:28.567057   52116 kubeadm.go:322] CGROUPS_CPU: enabled
	I0531 18:53:28.567120   52116 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0531 18:53:28.567180   52116 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0531 18:53:28.567260   52116 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0531 18:53:28.567325   52116 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0531 18:53:28.567404   52116 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0531 18:53:28.633273   52116 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 18:53:28.633415   52116 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 18:53:28.633530   52116 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 18:53:28.806774   52116 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 18:53:28.807585   52116 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 18:53:28.807628   52116 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0531 18:53:28.884713   52116 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 18:53:28.888135   52116 out.go:204]   - Generating certificates and keys ...
	I0531 18:53:28.888324   52116 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0531 18:53:28.888425   52116 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0531 18:53:29.113640   52116 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 18:53:29.209469   52116 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0531 18:53:29.547779   52116 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0531 18:53:29.675068   52116 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0531 18:53:30.095920   52116 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0531 18:53:30.096071   52116 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-466444 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:53:30.426528   52116 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0531 18:53:30.426676   52116 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-466444 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0531 18:53:30.488277   52116 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 18:53:30.582682   52116 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 18:53:30.947917   52116 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0531 18:53:30.948021   52116 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 18:53:31.099730   52116 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 18:53:31.266661   52116 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 18:53:31.542486   52116 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 18:53:31.760185   52116 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 18:53:31.761518   52116 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 18:53:31.764192   52116 out.go:204]   - Booting up control plane ...
	I0531 18:53:31.764285   52116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 18:53:31.767255   52116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 18:53:31.768390   52116 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 18:53:31.769247   52116 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 18:53:31.772011   52116 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 18:53:38.274272   52116 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502278 seconds
	I0531 18:53:38.274417   52116 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 18:53:38.283991   52116 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 18:53:38.799178   52116 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 18:53:38.799363   52116 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-466444 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0531 18:53:39.305578   52116 kubeadm.go:322] [bootstrap-token] Using token: 6h8k7a.9xn4dmc2mnps0tx2
	I0531 18:53:39.307383   52116 out.go:204]   - Configuring RBAC rules ...
	I0531 18:53:39.307595   52116 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 18:53:39.310429   52116 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 18:53:39.318406   52116 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 18:53:39.320472   52116 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 18:53:39.322281   52116 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 18:53:39.324160   52116 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 18:53:39.330797   52116 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 18:53:39.552390   52116 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0531 18:53:39.721168   52116 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0531 18:53:39.722123   52116 kubeadm.go:322] 
	I0531 18:53:39.722198   52116 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0531 18:53:39.722205   52116 kubeadm.go:322] 
	I0531 18:53:39.722294   52116 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0531 18:53:39.722299   52116 kubeadm.go:322] 
	I0531 18:53:39.722319   52116 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0531 18:53:39.722373   52116 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 18:53:39.722457   52116 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 18:53:39.722472   52116 kubeadm.go:322] 
	I0531 18:53:39.722534   52116 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0531 18:53:39.722620   52116 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 18:53:39.722701   52116 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 18:53:39.722729   52116 kubeadm.go:322] 
	I0531 18:53:39.722842   52116 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 18:53:39.722943   52116 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0531 18:53:39.722953   52116 kubeadm.go:322] 
	I0531 18:53:39.723053   52116 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6h8k7a.9xn4dmc2mnps0tx2 \
	I0531 18:53:39.723181   52116 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 \
	I0531 18:53:39.723234   52116 kubeadm.go:322]     --control-plane 
	I0531 18:53:39.723272   52116 kubeadm.go:322] 
	I0531 18:53:39.723367   52116 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0531 18:53:39.723373   52116 kubeadm.go:322] 
	I0531 18:53:39.723460   52116 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6h8k7a.9xn4dmc2mnps0tx2 \
	I0531 18:53:39.723576   52116 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 
	I0531 18:53:39.725254   52116 kubeadm.go:322] W0531 18:53:28.530623    1384 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0531 18:53:39.725571   52116 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0531 18:53:39.725695   52116 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 18:53:39.725873   52116 kubeadm.go:322] W0531 18:53:31.766924    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0531 18:53:39.726063   52116 kubeadm.go:322] W0531 18:53:31.768156    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
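
The --discovery-token-ca-cert-hash value in the kubeadm join commands above is not arbitrary: it is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo, which joining nodes use to pin the CA during TLS bootstrap. A short Go sketch that recomputes it (a standard technique, not minikube code; the ca.crt path matches the certificateDir logged above):

// Recompute kubeadm's discovery-token-ca-cert-hash: the SHA-256 of the CA
// certificate's DER-encoded SubjectPublicKeyInfo.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}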
	I0531 18:53:39.726098   52116 cni.go:84] Creating CNI manager for ""
	I0531 18:53:39.726110   52116 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:53:39.728514   52116 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:53:39.730085   52116 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:53:39.733705   52116 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0531 18:53:39.733721   52116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 18:53:39.749058   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:53:40.161102   52116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:53:40.161153   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:40.161169   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140 minikube.k8s.io/name=ingress-addon-legacy-466444 minikube.k8s.io/updated_at=2023_05_31T18_53_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:40.167688   52116 ops.go:34] apiserver oom_adj: -16
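
The oom_adj probe above (cat /proc/$(pgrep kube-apiserver)/oom_adj returning -16) verifies that the kubelet marked the apiserver as a poor OOM-kill candidate. A sketch of the same probe, assuming a local /proc (in the test this actually runs inside the kic container over SSH):

// Read the kube-apiserver's OOM score adjustment from procfs; -16 means
// the kubelet lowered its chance of being OOM-killed. Illustrative sketch,
// not minikube's ops.go.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func apiserverOOMAdj() (string, error) {
	// pgrep -n: newest matching process, so we get a single PID.
	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("pgrep: %w", err)
	}
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}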
	I0531 18:53:40.270211   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:40.854216   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:41.354606   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:41.854597   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:42.354357   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:42.854027   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:43.354343   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:43.854146   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:44.354278   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:44.854085   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:45.354416   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:45.854242   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:46.354410   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:46.854518   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:47.354479   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:47.854584   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:48.354606   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:48.854029   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:49.353882   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:49.853925   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:50.353773   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:50.854227   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:51.353992   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:51.853792   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:52.354038   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:52.854623   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:53.353650   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:53.854256   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:54.354448   52116 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:53:54.453508   52116 kubeadm.go:1076] duration metric: took 14.29240758s to wait for elevateKubeSystemPrivileges.
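
The block of repeated `kubectl get sa default` lines above is a fixed-interval retry: minikube polls until the default ServiceAccount exists (evidence that the controller-manager has finished bootstrapping the namespace) before granting kube-system elevated RBAC. The shape of that loop, sketched in Go with the paths from this run (the helper name is illustrative):

// Poll `kubectl get sa default` every 500ms until it succeeds or the
// deadline passes; mirrors the retry visible in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.18.20/kubectl",
		"/var/lib/minikube/kubeconfig", 3*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}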
	I0531 18:53:54.453546   52116 kubeadm.go:406] StartCluster complete in 26.01810135s
	I0531 18:53:54.453568   52116 settings.go:142] acquiring lock: {Name:mk168872ecacf1e04453fffdd7073a8caed6462b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:54.453643   52116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 18:53:54.454327   52116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/kubeconfig: {Name:mk2e9ef864ed1e4aaf9a6e1bd97970840e57fe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:53:54.454601   52116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:53:54.454722   52116 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0531 18:53:54.454797   52116 config.go:182] Loaded profile config "ingress-addon-legacy-466444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0531 18:53:54.454806   52116 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-466444"
	I0531 18:53:54.454820   52116 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-466444"
	I0531 18:53:54.454849   52116 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-466444"
	I0531 18:53:54.454825   52116 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-466444"
	I0531 18:53:54.454983   52116 host.go:66] Checking if "ingress-addon-legacy-466444" exists ...
	I0531 18:53:54.455281   52116 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-466444 --format={{.State.Status}}
	I0531 18:53:54.455256   52116 kapi.go:59] client config for ingress-addon-legacy-466444: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 18:53:54.455483   52116 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-466444 --format={{.State.Status}}
	I0531 18:53:54.456166   52116 cert_rotation.go:137] Starting client certificate rotation controller
	I0531 18:53:54.475201   52116 kapi.go:59] client config for ingress-addon-legacy-466444: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 18:53:54.478199   52116 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:53:54.477607   52116 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-466444"
	I0531 18:53:54.478246   52116 host.go:66] Checking if "ingress-addon-legacy-466444" exists ...
	I0531 18:53:54.480117   52116 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:53:54.480144   52116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:53:54.480187   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:54.480540   52116 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-466444 --format={{.State.Status}}
	I0531 18:53:54.498784   52116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa Username:docker}
	I0531 18:53:54.498898   52116 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:53:54.498917   52116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:53:54.498969   52116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-466444
	I0531 18:53:54.516792   52116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/ingress-addon-legacy-466444/id_rsa Username:docker}
	I0531 18:53:54.533494   52116 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:53:54.593344   52116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:53:54.660466   52116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:53:54.859815   52116 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
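
The sed pipeline at 18:53:54.533494 rewrites the coredns ConfigMap in place; the stanza it injects ahead of the `forward . /etc/resolv.conf` line in the Corefile is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

so pods can resolve host.minikube.internal to the gateway IP while every other name falls through to the regular forwarder (the same pipeline also inserts a `log` directive into the Corefile).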
	I0531 18:53:54.981462   52116 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-466444" context rescaled to 1 replicas
	I0531 18:53:54.981512   52116 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:53:54.983669   52116 out.go:177] * Verifying Kubernetes components...
	I0531 18:53:54.986675   52116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:53:55.161524   52116 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 18:53:55.160409   52116 kapi.go:59] client config for ingress-addon-legacy-466444: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 18:53:55.163263   52116 addons.go:499] enable addons completed in 708.545162ms: enabled=[storage-provisioner default-storageclass]
	I0531 18:53:55.163460   52116 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-466444" to be "Ready" ...
	I0531 18:53:57.168588   52116 node_ready.go:58] node "ingress-addon-legacy-466444" has status "Ready":"False"
	I0531 18:53:59.169120   52116 node_ready.go:58] node "ingress-addon-legacy-466444" has status "Ready":"False"
	I0531 18:54:00.193979   52116 node_ready.go:49] node "ingress-addon-legacy-466444" has status "Ready":"True"
	I0531 18:54:00.194009   52116 node_ready.go:38] duration metric: took 5.03052924s waiting for node "ingress-addon-legacy-466444" to be "Ready" ...
	I0531 18:54:00.194019   52116 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:54:00.201194   52116 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-qwh7j" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:02.453735   52116 pod_ready.go:102] pod "coredns-66bff467f8-qwh7j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-05-31 18:53:54 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0531 18:54:04.454556   52116 pod_ready.go:102] pod "coredns-66bff467f8-qwh7j" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-05-31 18:53:54 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0531 18:54:06.456754   52116 pod_ready.go:102] pod "coredns-66bff467f8-qwh7j" in "kube-system" namespace has status "Ready":"False"
	I0531 18:54:06.955900   52116 pod_ready.go:92] pod "coredns-66bff467f8-qwh7j" in "kube-system" namespace has status "Ready":"True"
	I0531 18:54:06.955925   52116 pod_ready.go:81] duration metric: took 6.754700743s waiting for pod "coredns-66bff467f8-qwh7j" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:06.955934   52116 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-466444" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:06.959909   52116 pod_ready.go:92] pod "etcd-ingress-addon-legacy-466444" in "kube-system" namespace has status "Ready":"True"
	I0531 18:54:06.959931   52116 pod_ready.go:81] duration metric: took 3.990284ms waiting for pod "etcd-ingress-addon-legacy-466444" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:06.959946   52116 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-466444" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:06.963949   52116 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-466444" in "kube-system" namespace has status "Ready":"True"
	I0531 18:54:06.963969   52116 pod_ready.go:81] duration metric: took 4.014628ms waiting for pod "kube-apiserver-ingress-addon-legacy-466444" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:06.963978   52116 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-466444" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:06.967835   52116 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-466444" in "kube-system" namespace has status "Ready":"True"
	I0531 18:54:06.967855   52116 pod_ready.go:81] duration metric: took 3.870733ms waiting for pod "kube-controller-manager-ingress-addon-legacy-466444" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:06.967867   52116 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d2frz" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:06.971527   52116 pod_ready.go:92] pod "kube-proxy-d2frz" in "kube-system" namespace has status "Ready":"True"
	I0531 18:54:06.971552   52116 pod_ready.go:81] duration metric: took 3.674322ms waiting for pod "kube-proxy-d2frz" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:06.971565   52116 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-466444" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:07.151982   52116 request.go:628] Waited for 180.331167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-466444
	I0531 18:54:07.351723   52116 request.go:628] Waited for 197.378466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-466444
	I0531 18:54:07.354589   52116 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-466444" in "kube-system" namespace has status "Ready":"True"
	I0531 18:54:07.354609   52116 pod_ready.go:81] duration metric: took 383.035713ms waiting for pod "kube-scheduler-ingress-addon-legacy-466444" in "kube-system" namespace to be "Ready" ...
	I0531 18:54:07.354620   52116 pod_ready.go:38] duration metric: took 7.160585059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
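
The pod_ready.go checks above boil down to reading the PodReady condition from each pod's status. A minimal client-go sketch of that predicate, assuming the kubeconfig path and pod name from this run (not minikube's actual implementation):

// Report whether a pod's PodReady condition is True, the same signal
// pod_ready.go waits on above.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16569-7270/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bff467f8-qwh7j", metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("%s Ready=%v\n", pod.Name, podReady(pod))
}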
	I0531 18:54:07.354637   52116 api_server.go:52] waiting for apiserver process to appear ...
	I0531 18:54:07.354680   52116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:54:07.365143   52116 api_server.go:72] duration metric: took 12.38359067s to wait for apiserver process to appear ...
	I0531 18:54:07.365166   52116 api_server.go:88] waiting for apiserver healthz status ...
	I0531 18:54:07.365180   52116 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0531 18:54:07.370335   52116 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0531 18:54:07.371139   52116 api_server.go:141] control plane version: v1.18.20
	I0531 18:54:07.371160   52116 api_server.go:131] duration metric: took 5.988286ms to wait for apiserver health ...
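
The healthz check at api_server.go:253 is a mutually authenticated HTTPS GET; the client cert/key and CA paths come from the rest.Config logged earlier. A self-contained sketch of that probe under those assumptions:

// GET https://<apiserver>/healthz using the profile's client certificate;
// a healthy apiserver returns 200 with body "ok", as logged above.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/16569-7270/.minikube"
	profile := base + "/profiles/ingress-addon-legacy-466444"
	cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	}}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}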
	I0531 18:54:07.371168   52116 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:54:07.551576   52116 request.go:628] Waited for 180.338668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:54:07.557451   52116 system_pods.go:59] 8 kube-system pods found
	I0531 18:54:07.557481   52116 system_pods.go:61] "coredns-66bff467f8-qwh7j" [0bd84d2e-3b41-4418-bd4b-9251eaeaa6dd] Running
	I0531 18:54:07.557486   52116 system_pods.go:61] "etcd-ingress-addon-legacy-466444" [a57e6b19-9a51-4b7a-bce4-71b8d1a19705] Running
	I0531 18:54:07.557491   52116 system_pods.go:61] "kindnet-dwbjx" [be61d3c2-aa6d-4027-891a-bbc0390280b5] Running
	I0531 18:54:07.557495   52116 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-466444" [6c98a6a8-3282-429c-ae2f-6c5ceeabcfe0] Running
	I0531 18:54:07.557501   52116 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-466444" [4fcb90e4-5bd1-4f72-a3b0-f92568acfa80] Running
	I0531 18:54:07.557507   52116 system_pods.go:61] "kube-proxy-d2frz" [3f25be67-6a5b-4bcd-88ac-f8d8627f006b] Running
	I0531 18:54:07.557518   52116 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-466444" [8a3d1f04-0098-4026-bd14-2e17ba78543e] Running
	I0531 18:54:07.557532   52116 system_pods.go:61] "storage-provisioner" [eefa4969-5602-4d72-bd1e-ddc6deee6d6c] Running
	I0531 18:54:07.557540   52116 system_pods.go:74] duration metric: took 186.365725ms to wait for pod list to return data ...
	I0531 18:54:07.557552   52116 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:54:07.751975   52116 request.go:628] Waited for 194.351745ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:54:07.754471   52116 default_sa.go:45] found service account: "default"
	I0531 18:54:07.754496   52116 default_sa.go:55] duration metric: took 196.935278ms for default service account to be created ...
	I0531 18:54:07.754506   52116 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 18:54:07.951961   52116 request.go:628] Waited for 197.383788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:54:07.956942   52116 system_pods.go:86] 8 kube-system pods found
	I0531 18:54:07.956974   52116 system_pods.go:89] "coredns-66bff467f8-qwh7j" [0bd84d2e-3b41-4418-bd4b-9251eaeaa6dd] Running
	I0531 18:54:07.956983   52116 system_pods.go:89] "etcd-ingress-addon-legacy-466444" [a57e6b19-9a51-4b7a-bce4-71b8d1a19705] Running
	I0531 18:54:07.956990   52116 system_pods.go:89] "kindnet-dwbjx" [be61d3c2-aa6d-4027-891a-bbc0390280b5] Running
	I0531 18:54:07.956996   52116 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-466444" [6c98a6a8-3282-429c-ae2f-6c5ceeabcfe0] Running
	I0531 18:54:07.957006   52116 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-466444" [4fcb90e4-5bd1-4f72-a3b0-f92568acfa80] Running
	I0531 18:54:07.957019   52116 system_pods.go:89] "kube-proxy-d2frz" [3f25be67-6a5b-4bcd-88ac-f8d8627f006b] Running
	I0531 18:54:07.957028   52116 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-466444" [8a3d1f04-0098-4026-bd14-2e17ba78543e] Running
	I0531 18:54:07.957035   52116 system_pods.go:89] "storage-provisioner" [eefa4969-5602-4d72-bd1e-ddc6deee6d6c] Running
	I0531 18:54:07.957041   52116 system_pods.go:126] duration metric: took 202.531206ms to wait for k8s-apps to be running ...
	I0531 18:54:07.957050   52116 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 18:54:07.957097   52116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:54:07.967920   52116 system_svc.go:56] duration metric: took 10.859107ms WaitForService to wait for kubelet.
	I0531 18:54:07.967956   52116 kubeadm.go:581] duration metric: took 12.986402438s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 18:54:07.967981   52116 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:54:08.151430   52116 request.go:628] Waited for 183.360928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0531 18:54:08.154002   52116 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0531 18:54:08.154026   52116 node_conditions.go:123] node cpu capacity is 8
	I0531 18:54:08.154037   52116 node_conditions.go:105] duration metric: took 186.050402ms to run NodePressure ...
	I0531 18:54:08.154046   52116 start.go:228] waiting for startup goroutines ...
	I0531 18:54:08.154052   52116 start.go:233] waiting for cluster config update ...
	I0531 18:54:08.154060   52116 start.go:242] writing updated cluster config ...
	I0531 18:54:08.154317   52116 ssh_runner.go:195] Run: rm -f paused
	I0531 18:54:08.199380   52116 start.go:573] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0531 18:54:08.201990   52116 out.go:177] 
	W0531 18:54:08.204017   52116 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0531 18:54:08.205694   52116 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0531 18:54:08.207448   52116 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-466444" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* May 31 18:56:59 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:56:59.998624533Z" level=info msg="Created container 71508b52cf062a7064a8225a244347f49e3d7087bfed978fe63c27f15e7f195f: default/hello-world-app-5f5d8b66bb-mcgr4/hello-world-app" id=f1216ece-0594-4320-999c-c9a66edd39b0 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	May 31 18:56:59 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:56:59.999159977Z" level=info msg="Starting container: 71508b52cf062a7064a8225a244347f49e3d7087bfed978fe63c27f15e7f195f" id=7905e1dd-5296-4cfe-a23d-f70503a08f74 name=/runtime.v1alpha2.RuntimeService/StartContainer
	May 31 18:57:00 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:00.008160385Z" level=info msg="Started container" PID=4695 containerID=71508b52cf062a7064a8225a244347f49e3d7087bfed978fe63c27f15e7f195f description=default/hello-world-app-5f5d8b66bb-mcgr4/hello-world-app id=7905e1dd-5296-4cfe-a23d-f70503a08f74 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=94320ac03fd56bdcdf4b9a3a4913b80c482bd19aa58d9829f87b1d3b0096a4e2
	May 31 18:57:09 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:09.905305072Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=09acc698-a44f-47a9-bd88-c5b53fb4510a name=/runtime.v1alpha2.ImageService/ImageStatus
	May 31 18:57:13 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:13.905708746Z" level=info msg="Stopping pod sandbox: ee0a5eedc7c6920e82a3f268b7db79005eccaacd615dd2254072ef1f824c7d4c" id=8319855b-f781-4e04-9e6e-6e2e77cd9016 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 18:57:13 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:13.906612762Z" level=info msg="Stopped pod sandbox: ee0a5eedc7c6920e82a3f268b7db79005eccaacd615dd2254072ef1f824c7d4c" id=8319855b-f781-4e04-9e6e-6e2e77cd9016 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 18:57:14 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:14.312503062Z" level=info msg="Stopping pod sandbox: ee0a5eedc7c6920e82a3f268b7db79005eccaacd615dd2254072ef1f824c7d4c" id=bcda7636-4762-4f0e-ae0c-4cd23d2de6b6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 18:57:14 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:14.312560212Z" level=info msg="Stopped pod sandbox (already stopped): ee0a5eedc7c6920e82a3f268b7db79005eccaacd615dd2254072ef1f824c7d4c" id=bcda7636-4762-4f0e-ae0c-4cd23d2de6b6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 18:57:14 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:14.857684101Z" level=info msg="Stopping container: 28ad5b7c8b0807030c9e51e8ddb236685d13727c5f426ca471b2bc478d773735 (timeout: 2s)" id=b54f0491-730b-4680-819d-1403e683aec5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	May 31 18:57:14 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:14.860537021Z" level=info msg="Stopping container: 28ad5b7c8b0807030c9e51e8ddb236685d13727c5f426ca471b2bc478d773735 (timeout: 2s)" id=9a6e0628-cb7e-4723-9892-3bebee762c00 name=/runtime.v1alpha2.RuntimeService/StopContainer
	May 31 18:57:16 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:16.867376280Z" level=warning msg="Stopping container 28ad5b7c8b0807030c9e51e8ddb236685d13727c5f426ca471b2bc478d773735 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=b54f0491-730b-4680-819d-1403e683aec5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	May 31 18:57:16 ingress-addon-legacy-466444 conmon[3378]: conmon 28ad5b7c8b0807030c9e <ninfo>: container 3390 exited with status 137
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.029967698Z" level=info msg="Stopped container 28ad5b7c8b0807030c9e51e8ddb236685d13727c5f426ca471b2bc478d773735: ingress-nginx/ingress-nginx-controller-7fcf777cb7-4pcxm/controller" id=b54f0491-730b-4680-819d-1403e683aec5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.029996987Z" level=info msg="Stopped container 28ad5b7c8b0807030c9e51e8ddb236685d13727c5f426ca471b2bc478d773735: ingress-nginx/ingress-nginx-controller-7fcf777cb7-4pcxm/controller" id=9a6e0628-cb7e-4723-9892-3bebee762c00 name=/runtime.v1alpha2.RuntimeService/StopContainer
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.030556484Z" level=info msg="Stopping pod sandbox: 4a0995ef219c9c43b4b0c4dd60956b865c548bcb82f4927a1017f3914b660feb" id=57b5f36b-5d15-44d4-b54e-d8b02db95082 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.030661672Z" level=info msg="Stopping pod sandbox: 4a0995ef219c9c43b4b0c4dd60956b865c548bcb82f4927a1017f3914b660feb" id=8113a57a-a592-46bd-bedf-2e9f58af8c41 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.033397254Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-E6SNLXFDF3KKUTJ5 - [0:0]\n:KUBE-HP-ZCGSAMZL7WBKWSHO - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-E6SNLXFDF3KKUTJ5\n-X KUBE-HP-ZCGSAMZL7WBKWSHO\nCOMMIT\n"
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.034640796Z" level=info msg="Closing host port tcp:80"
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.034678958Z" level=info msg="Closing host port tcp:443"
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.035621110Z" level=info msg="Host port tcp:80 does not have an open socket"
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.035642457Z" level=info msg="Host port tcp:443 does not have an open socket"
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.035765525Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-4pcxm Namespace:ingress-nginx ID:4a0995ef219c9c43b4b0c4dd60956b865c548bcb82f4927a1017f3914b660feb UID:1c86b481-ba1a-4842-a99a-281b82c16e6e NetNS:/var/run/netns/33ac7058-f573-4e15-bd3e-63473a6363c3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.035874789Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-4pcxm from CNI network \"kindnet\" (type=ptp)"
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.077571814Z" level=info msg="Stopped pod sandbox: 4a0995ef219c9c43b4b0c4dd60956b865c548bcb82f4927a1017f3914b660feb" id=57b5f36b-5d15-44d4-b54e-d8b02db95082 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	May 31 18:57:17 ingress-addon-legacy-466444 crio[957]: time="2023-05-31 18:57:17.077694013Z" level=info msg="Stopped pod sandbox (already stopped): 4a0995ef219c9c43b4b0c4dd60956b865c548bcb82f4927a1017f3914b660feb" id=8113a57a-a592-46bd-bedf-2e9f58af8c41 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	71508b52cf062       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            22 seconds ago      Running             hello-world-app           0                   94320ac03fd56       hello-world-app-5f5d8b66bb-mcgr4
	7bf3239930e43       docker.io/library/nginx@sha256:0b0af14a00ea0e4fd9b09e77d2b89b71b5c5a97f9aa073553f355415bc34ae33                    2 minutes ago       Running             nginx                     0                   0c35c75c18774       nginx
	28ad5b7c8b080       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   4a0995ef219c9       ingress-nginx-controller-7fcf777cb7-4pcxm
	1e09ae9d2451c       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   340e2f8c7afe3       ingress-nginx-admission-patch-7s884
	2af9b8786650e       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   695a2f1b3d9ec       ingress-nginx-admission-create-rtfzd
	5c1bf950acf39       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   a757bcd98f0b9       coredns-66bff467f8-qwh7j
	d4edb9334eb48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   48c3dc0506ccd       storage-provisioner
	1baae75fef672       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974                 3 minutes ago       Running             kindnet-cni               0                   5e4aa0bc07f7b       kindnet-dwbjx
	c60c500958e97       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   a1cd08ffcbeb4       kube-proxy-d2frz
	6025bb355423b       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   1fd08101e8e73       kube-apiserver-ingress-addon-legacy-466444
	7fb1f71d4785c       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   18bed6ca00578       kube-scheduler-ingress-addon-legacy-466444
	e5093734d9586       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   9df14fa065bf1       kube-controller-manager-ingress-addon-legacy-466444
	748cbf643213f       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   8a25f9d846d79       etcd-ingress-addon-legacy-466444
	
	* 
	* ==> coredns [5c1bf950acf3905697d8857198b7cb2a59a070293796b5c20d94aee1da64e68f] <==
	* [INFO] 10.244.0.5:47299 - 25511 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004858929s
	[INFO] 10.244.0.5:59924 - 17615 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004392697s
	[INFO] 10.244.0.5:53383 - 49511 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004286596s
	[INFO] 10.244.0.5:58435 - 9818 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004317443s
	[INFO] 10.244.0.5:59341 - 38750 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004330805s
	[INFO] 10.244.0.5:36237 - 46038 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004413178s
	[INFO] 10.244.0.5:47299 - 50324 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004276906s
	[INFO] 10.244.0.5:54333 - 25014 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00437707s
	[INFO] 10.244.0.5:37158 - 43875 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004468726s
	[INFO] 10.244.0.5:59924 - 1833 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005945724s
	[INFO] 10.244.0.5:47299 - 20513 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005943029s
	[INFO] 10.244.0.5:54333 - 40827 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005856615s
	[INFO] 10.244.0.5:37158 - 18118 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005961053s
	[INFO] 10.244.0.5:53383 - 39036 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006037839s
	[INFO] 10.244.0.5:54333 - 20733 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061471s
	[INFO] 10.244.0.5:37158 - 16574 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038903s
	[INFO] 10.244.0.5:59924 - 21289 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000220474s
	[INFO] 10.244.0.5:58435 - 15654 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006360739s
	[INFO] 10.244.0.5:47299 - 50091 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000205804s
	[INFO] 10.244.0.5:59341 - 31625 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006311372s
	[INFO] 10.244.0.5:36237 - 49159 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006295691s
	[INFO] 10.244.0.5:53383 - 1104 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000231189s
	[INFO] 10.244.0.5:58435 - 2679 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088966s
	[INFO] 10.244.0.5:59341 - 33262 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071019s
	[INFO] 10.244.0.5:36237 - 7118 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059521s
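
Editor's note: the wall of NXDOMAIN answers above is expected resolver behavior, not a CoreDNS fault. The name being resolved, hello-world-app.default.svc.cluster.local, contains fewer than five dots, so under the default ndots:5 option the pod's resolver appends every search suffix first, including the host-inherited GCE domains c.k8s-minikube.internal and google.internal, before trying the name as-is, which is the final NOERROR. A typical pod resolv.conf under these defaults looks roughly like this (illustrative sketch: the cluster DNS address is the kubeadm default, and the actual file was not captured in this report):

	nameserver 10.96.0.10
	search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	options ndots:5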
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-466444
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-466444
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=ingress-addon-legacy-466444
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T18_53_40_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 18:53:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-466444
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 18:57:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 18:57:10 +0000   Wed, 31 May 2023 18:53:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 18:57:10 +0000   Wed, 31 May 2023 18:53:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 18:57:10 +0000   Wed, 31 May 2023 18:53:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 18:57:10 +0000   Wed, 31 May 2023 18:54:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-466444
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b5b9b2484504ef28f11219d279563f2
	  System UUID:                378cf022-5ad2-4821-b612-6157d9086e71
	  Boot ID:                    858e553b-6392-44c5-a611-8f56a2b0fab6
	  Kernel Version:             5.15.0-1035-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-mcgr4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-qwh7j                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m28s
	  kube-system                 etcd-ingress-addon-legacy-466444                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kindnet-dwbjx                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m28s
	  kube-system                 kube-apiserver-ingress-addon-legacy-466444             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-466444    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-proxy-d2frz                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-scheduler-ingress-addon-legacy-466444             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m50s (x5 over 3m50s)  kubelet     Node ingress-addon-legacy-466444 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x5 over 3m50s)  kubelet     Node ingress-addon-legacy-466444 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x4 over 3m50s)  kubelet     Node ingress-addon-legacy-466444 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m43s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m43s                  kubelet     Node ingress-addon-legacy-466444 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s                  kubelet     Node ingress-addon-legacy-466444 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s                  kubelet     Node ingress-addon-legacy-466444 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m27s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m22s                  kubelet     Node ingress-addon-legacy-466444 status is now: NodeReady
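
Editor's note: when this report's helpers echo captured kubectl output, literal '%' characters can come back as "%!)(MISSING)", e.g. "0 (0%)" rendered as "0 (0%!)(MISSING)". That is a Go fmt artifact produced when captured text is used as a printf format string with no operands. A minimal, self-contained sketch of the mechanism (hypothetical code, not minikube's actual helper):

	package main

	import "fmt"

	func main() {
		captured := "cpu 750m (9%)" // a line of kubectl output containing a literal '%'

		// Bug: captured text used as the format string. fmt parses "%)" as a
		// verb with no matching operand and prints "%!)(MISSING)" in its place.
		// (go vet's printf check flags exactly this non-constant format string.)
		fmt.Println(fmt.Sprintf(captured)) // -> cpu 750m (9%!)(MISSING)

		// Fix: pass captured text as an operand, never as the format.
		fmt.Printf("%s\n", captured) // -> cpu 750m (9%)
	}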
	
	* 
	* ==> dmesg <==
	* [  +0.004959] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007966] FS-Cache: N-cookie d=0000000091dc95f5{9p.inode} n=00000000d3cdecde
	[  +0.008741] FS-Cache: N-key=[8] '74a00f0200000000'
	[  +0.313415] FS-Cache: Duplicate cookie detected
	[  +0.004687] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006753] FS-Cache: O-cookie d=0000000091dc95f5{9p.inode} n=000000009f8e728a
	[  +0.007402] FS-Cache: O-key=[8] '83a00f0200000000'
	[  +0.006311] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006577] FS-Cache: N-cookie d=0000000091dc95f5{9p.inode} n=00000000594058f6
	[  +0.007352] FS-Cache: N-key=[8] '83a00f0200000000'
	[ +19.428279] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[May31 18:54] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[  +1.028188] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[  +2.015837] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[  +4.255686] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[May31 18:55] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[ +16.126833] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[ +33.277509] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
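
Editor's note: a "martian" is a packet whose addresses are impossible on the interface it arrived on; in this kernel log format the first address is the destination and the second is the source, so eth0 is receiving packets for pod 10.244.0.5 that carry the loopback source 127.0.0.1, and the kernel drops them. That pattern is consistent with localhost traffic being DNAT'd into the pod network, which routing refuses unless route_localnet is enabled. A diagnostic sketch for the node (an assumption about the remedy, offered for debugging only, not a fix confirmed by this report):

	# stop treating 127.0.0.0/8 as a martian source/destination while routing
	sysctl -w net.ipv4.conf.all.route_localnet=1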
	
	* 
	* ==> etcd [748cbf643213f9b08b65c172c8448c666cddaac2b3394e075b0684e6d03c8fde] <==
	* raft2023/05/31 18:53:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-05-31 18:53:32.983491 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-05-31 18:53:32.985484 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-05-31 18:53:32.985657 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-05-31 18:53:32.985962 I | embed: listening for peers on 192.168.49.2:2380
	2023-05-31 18:53:32.986026 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/05/31 18:53:33 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/05/31 18:53:33 INFO: aec36adc501070cc became candidate at term 2
	raft2023/05/31 18:53:33 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/05/31 18:53:33 INFO: aec36adc501070cc became leader at term 2
	raft2023/05/31 18:53:33 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-05-31 18:53:33.474650 I | embed: ready to serve client requests
	2023-05-31 18:53:33.474806 I | etcdserver: published {Name:ingress-addon-legacy-466444 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-05-31 18:53:33.474883 I | embed: ready to serve client requests
	2023-05-31 18:53:33.474976 I | etcdserver: setting up the initial cluster version to 3.4
	2023-05-31 18:53:33.476572 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-05-31 18:53:33.476634 I | etcdserver/api: enabled capabilities for version 3.4
	2023-05-31 18:53:33.477122 I | embed: serving client requests on 192.168.49.2:2379
	2023-05-31 18:53:33.477437 I | embed: serving client requests on 127.0.0.1:2379
	2023-05-31 18:54:00.190942 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-ingress-addon-legacy-466444\" " with result "range_response_count:1 size:4788" took too long (195.552936ms) to execute
	2023-05-31 18:54:00.449057 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-66bff467f8-qwh7j.17644f3197f663e7\" " with result "range_response_count:1 size:829" took too long (252.016431ms) to execute
	2023-05-31 18:54:00.449098 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-ingress-addon-legacy-466444\" " with result "range_response_count:1 size:6682" took too long (250.755764ms) to execute
	2023-05-31 18:54:00.449135 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2792" took too long (252.204969ms) to execute
	2023-05-31 18:54:00.449156 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-qwh7j\" " with result "range_response_count:1 size:3753" took too long (247.128567ms) to execute
	2023-05-31 18:54:00.721751 W | etcdserver: request "header:<ID:8128021455678714322 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-ingress-addon-legacy-466444\" mod_revision:336 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-ingress-addon-legacy-466444\" value_size:6464 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-ingress-addon-legacy-466444\" > >>" with result "size:16" took too long (125.300886ms) to execute
	
	* 
	* ==> kernel <==
	*  18:57:22 up 39 min,  0 users,  load average: 0.19, 0.71, 0.54
	Linux ingress-addon-legacy-466444 5.15.0-1035-gcp #43~20.04.1-Ubuntu SMP Mon May 22 16:49:11 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [1baae75fef67254ffbf6c39a036e2013ab5a06048bfcc2574393e6eb94d437db] <==
	* I0531 18:55:18.198290       1 main.go:227] handling current node
	I0531 18:55:28.201268       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:55:28.201292       1 main.go:227] handling current node
	I0531 18:55:38.212352       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:55:38.212376       1 main.go:227] handling current node
	I0531 18:55:48.216906       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:55:48.216934       1 main.go:227] handling current node
	I0531 18:55:58.220270       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:55:58.220323       1 main.go:227] handling current node
	I0531 18:56:08.231286       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:56:08.231312       1 main.go:227] handling current node
	I0531 18:56:18.243360       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:56:18.243383       1 main.go:227] handling current node
	I0531 18:56:28.247146       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:56:28.247176       1 main.go:227] handling current node
	I0531 18:56:38.255244       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:56:38.255271       1 main.go:227] handling current node
	I0531 18:56:48.258576       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:56:48.258600       1 main.go:227] handling current node
	I0531 18:56:58.267506       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:56:58.267541       1 main.go:227] handling current node
	I0531 18:57:08.280026       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:57:08.280056       1 main.go:227] handling current node
	I0531 18:57:18.292264       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0531 18:57:18.292287       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6025bb355423b61baecbb57a66094c57821e48b494abc86a26cbc2625b952ca8] <==
	* I0531 18:53:36.529260       1 naming_controller.go:291] Starting NamingConditionController
	E0531 18:53:36.533615       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0531 18:53:36.641804       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 18:53:36.641837       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0531 18:53:36.641837       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0531 18:53:36.641881       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 18:53:36.641890       1 cache.go:39] Caches are synced for autoregister controller
	I0531 18:53:36.694489       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:53:37.527696       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0531 18:53:37.527733       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 18:53:37.532393       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0531 18:53:37.535075       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0531 18:53:37.535095       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0531 18:53:37.802024       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 18:53:37.828569       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0531 18:53:37.883043       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0531 18:53:37.883908       1 controller.go:609] quota admission added evaluator for: endpoints
	I0531 18:53:37.888969       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:53:38.814808       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0531 18:53:39.544289       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0531 18:53:39.711246       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0531 18:53:54.748338       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0531 18:53:54.945020       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0531 18:54:08.665804       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0531 18:54:38.232220       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [e5093734d9586a7405f082c97cd96a3ee7b22e587792d48103d0bcedb070eb88] <==
	* I0531 18:53:54.746825       1 shared_informer.go:230] Caches are synced for job 
	I0531 18:53:54.759777       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"5e34b18d-c787-428d-8062-1ac92551cd03", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-d2frz
	I0531 18:53:54.762600       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"2bbd4c49-8ffe-428f-a47c-afd1ffc0ef6f", APIVersion:"apps/v1", ResourceVersion:"227", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-dwbjx
	I0531 18:53:54.842195       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0531 18:53:54.847870       1 shared_informer.go:230] Caches are synced for resource quota 
	I0531 18:53:54.848739       1 shared_informer.go:230] Caches are synced for disruption 
	I0531 18:53:54.848754       1 disruption.go:339] Sending events to api server.
	I0531 18:53:54.872309       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0531 18:53:54.872337       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0531 18:53:54.941862       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0531 18:53:54.941892       1 shared_informer.go:230] Caches are synced for deployment 
	I0531 18:53:54.949269       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"4514a069-234b-49ee-9ac8-1f3d626712cb", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0531 18:53:54.963764       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"1bb5b9b7-8b8c-4c01-aa5b-75d3b65097cb", APIVersion:"apps/v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-qwh7j
	I0531 18:53:55.794689       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0531 18:53:55.794726       1 shared_informer.go:230] Caches are synced for resource quota 
	I0531 18:54:04.742535       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0531 18:54:08.655893       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"09d7be88-c0a6-4246-a10e-9ec1191935c5", APIVersion:"apps/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0531 18:54:08.661159       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3e13747d-8ebb-48d3-b140-1bebb7d6ddd5", APIVersion:"apps/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-4pcxm
	I0531 18:54:08.673020       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"77b5ec83-71f9-4b16-8b4d-c63f42dae74f", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-rtfzd
	I0531 18:54:08.754513       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b4065f43-2836-4c77-837c-fc80fcd35206", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-7s884
	I0531 18:54:11.999706       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b4065f43-2836-4c77-837c-fc80fcd35206", APIVersion:"batch/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0531 18:54:12.007113       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"77b5ec83-71f9-4b16-8b4d-c63f42dae74f", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0531 18:56:58.251098       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"66d59bed-1c0e-42a9-a203-f124ea6ce216", APIVersion:"apps/v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0531 18:56:58.257431       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"9f695278-711a-43d7-9e38-7a72a3a92f44", APIVersion:"apps/v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-mcgr4
	E0531 18:57:19.582478       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-49ckl" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [c60c500958e97256359b8d31e1ce59399e412fb7e5c1869589e57df6b30001e8] <==
	* W0531 18:53:55.628063       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0531 18:53:55.634206       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0531 18:53:55.634229       1 server_others.go:186] Using iptables Proxier.
	I0531 18:53:55.634515       1 server.go:583] Version: v1.18.20
	I0531 18:53:55.635011       1 config.go:315] Starting service config controller
	I0531 18:53:55.635024       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0531 18:53:55.635167       1 config.go:133] Starting endpoints config controller
	I0531 18:53:55.635194       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0531 18:53:55.735354       1 shared_informer.go:230] Caches are synced for service config 
	I0531 18:53:55.735392       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [7fb1f71d4785cf8fc2b8e730fd93745a93ac989a475a3a4f034f662f4c88b2ec] <==
	* W0531 18:53:36.561031       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 18:53:36.561037       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 18:53:36.652942       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0531 18:53:36.652966       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0531 18:53:36.654915       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 18:53:36.654940       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 18:53:36.655278       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0531 18:53:36.655339       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0531 18:53:36.656504       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:53:36.657345       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:53:36.657350       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:53:36.657470       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:53:36.657676       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:53:36.657843       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:53:36.657895       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:53:36.657895       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:53:36.657925       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:53:36.657970       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:53:36.658059       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 18:53:36.658137       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:53:37.556758       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:53:37.559446       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:53:37.569963       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:53:37.583448       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0531 18:53:40.155121       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* May 31 18:56:40 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:56:40.905318    1877 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 18:56:40 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:56:40.905345    1877 pod_workers.go:191] Error syncing pod a1077373-478f-4a7a-8474-f4718a570687 ("kube-ingress-dns-minikube_kube-system(a1077373-478f-4a7a-8474-f4718a570687)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	May 31 18:56:54 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:56:54.905276    1877 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 18:56:54 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:56:54.905331    1877 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 18:56:54 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:56:54.905387    1877 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 18:56:54 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:56:54.905456    1877 pod_workers.go:191] Error syncing pod a1077373-478f-4a7a-8474-f4718a570687 ("kube-ingress-dns-minikube_kube-system(a1077373-478f-4a7a-8474-f4718a570687)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	May 31 18:56:58 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:56:58.264201    1877 topology_manager.go:235] [topologymanager] Topology Admit Handler
	May 31 18:56:58 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:56:58.460391    1877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-scmkx" (UniqueName: "kubernetes.io/secret/9862b019-055e-448a-b89a-648ba8ee8f17-default-token-scmkx") pod "hello-world-app-5f5d8b66bb-mcgr4" (UID: "9862b019-055e-448a-b89a-648ba8ee8f17")
	May 31 18:56:58 ingress-addon-legacy-466444 kubelet[1877]: W0531 18:56:58.896427    1877 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/5d78a8ba775bba118c5ff1e0afc5ae2e0a3c3dd9fcaf30c94e45d99a82495dc3/crio/crio-94320ac03fd56bdcdf4b9a3a4913b80c482bd19aa58d9829f87b1d3b0096a4e2 WatchSource:0}: Error finding container 94320ac03fd56bdcdf4b9a3a4913b80c482bd19aa58d9829f87b1d3b0096a4e2: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc000ce6260 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	May 31 18:57:09 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:57:09.905661    1877 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 18:57:09 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:57:09.905700    1877 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 18:57:09 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:57:09.905776    1877 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	May 31 18:57:09 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:57:09.906108    1877 pod_workers.go:191] Error syncing pod a1077373-478f-4a7a-8474-f4718a570687 ("kube-ingress-dns-minikube_kube-system(a1077373-478f-4a7a-8474-f4718a570687)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	May 31 18:57:13 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:57:13.795205    1877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-dwz9n" (UniqueName: "kubernetes.io/secret/a1077373-478f-4a7a-8474-f4718a570687-minikube-ingress-dns-token-dwz9n") pod "a1077373-478f-4a7a-8474-f4718a570687" (UID: "a1077373-478f-4a7a-8474-f4718a570687")
	May 31 18:57:13 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:57:13.797163    1877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1077373-478f-4a7a-8474-f4718a570687-minikube-ingress-dns-token-dwz9n" (OuterVolumeSpecName: "minikube-ingress-dns-token-dwz9n") pod "a1077373-478f-4a7a-8474-f4718a570687" (UID: "a1077373-478f-4a7a-8474-f4718a570687"). InnerVolumeSpecName "minikube-ingress-dns-token-dwz9n". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 31 18:57:13 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:57:13.895510    1877 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-dwz9n" (UniqueName: "kubernetes.io/secret/a1077373-478f-4a7a-8474-f4718a570687-minikube-ingress-dns-token-dwz9n") on node "ingress-addon-legacy-466444" DevicePath ""
	May 31 18:57:14 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:57:14.858838    1877 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-4pcxm.17644f60227a8bbb", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-4pcxm", UID:"1c86b481-ba1a-4842-a99a-281b82c16e6e", APIVersion:"v1", ResourceVersion:"464", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-466444"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1160402b319a7bb, ext:215346821333, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1160402b319a7bb, ext:215346821333, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-4pcxm.17644f60227a8bbb" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 31 18:57:14 ingress-addon-legacy-466444 kubelet[1877]: E0531 18:57:14.862990    1877 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-4pcxm.17644f60227a8bbb", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-4pcxm", UID:"1c86b481-ba1a-4842-a99a-281b82c16e6e", APIVersion:"v1", ResourceVersion:"464", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-466444"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1160402b319a7bb, ext:215346821333, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1160402b34674ac, ext:215349757379, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-4pcxm.17644f60227a8bbb" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	May 31 18:57:17 ingress-addon-legacy-466444 kubelet[1877]: W0531 18:57:17.307191    1877 pod_container_deletor.go:77] Container "4a0995ef219c9c43b4b0c4dd60956b865c548bcb82f4927a1017f3914b660feb" not found in pod's containers
	May 31 18:57:17 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:57:17.804229    1877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1c86b481-ba1a-4842-a99a-281b82c16e6e-webhook-cert") pod "1c86b481-ba1a-4842-a99a-281b82c16e6e" (UID: "1c86b481-ba1a-4842-a99a-281b82c16e6e")
	May 31 18:57:17 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:57:17.804270    1877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-dzcgj" (UniqueName: "kubernetes.io/secret/1c86b481-ba1a-4842-a99a-281b82c16e6e-ingress-nginx-token-dzcgj") pod "1c86b481-ba1a-4842-a99a-281b82c16e6e" (UID: "1c86b481-ba1a-4842-a99a-281b82c16e6e")
	May 31 18:57:17 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:57:17.806157    1877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c86b481-ba1a-4842-a99a-281b82c16e6e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1c86b481-ba1a-4842-a99a-281b82c16e6e" (UID: "1c86b481-ba1a-4842-a99a-281b82c16e6e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 31 18:57:17 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:57:17.806624    1877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c86b481-ba1a-4842-a99a-281b82c16e6e-ingress-nginx-token-dzcgj" (OuterVolumeSpecName: "ingress-nginx-token-dzcgj") pod "1c86b481-ba1a-4842-a99a-281b82c16e6e" (UID: "1c86b481-ba1a-4842-a99a-281b82c16e6e"). InnerVolumeSpecName "ingress-nginx-token-dzcgj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 31 18:57:17 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:57:17.904552    1877 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1c86b481-ba1a-4842-a99a-281b82c16e6e-webhook-cert") on node "ingress-addon-legacy-466444" DevicePath ""
	May 31 18:57:17 ingress-addon-legacy-466444 kubelet[1877]: I0531 18:57:17.904588    1877 reconciler.go:319] Volume detached for volume "ingress-nginx-token-dzcgj" (UniqueName: "kubernetes.io/secret/1c86b481-ba1a-4842-a99a-281b82c16e6e-ingress-nginx-token-dzcgj") on node "ingress-addon-legacy-466444" DevicePath ""
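
Editor's note: the repeated ImageInspectError above is a short-name resolution failure in CRI-O's image stack, not a pull failure. The reference "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:..." carries no registry host, and the node's /etc/containers/registries.conf defines no unqualified-search registries to expand it against, so the inspect is rejected before any network call. The two standard remedies are to fully qualify the reference (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...) or to allow short-name expansion on the node, e.g. (illustrative sketch, not the file captured from this node):

	# /etc/containers/registries.conf
	unqualified-search-registries = ["docker.io"]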
	
	* 
	* ==> storage-provisioner [d4edb9334eb4811f3b27be07fb0309d10ba43a0a10ce712e6eb769b5924b5aca] <==
	* I0531 18:54:05.105847       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 18:54:05.114878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 18:54:05.114923       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 18:54:05.120890       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 18:54:05.121083       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-466444_31b92c35-a16d-4f8a-8b7f-63f77c533c51!
	I0531 18:54:05.121133       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9058845-304f-4bdb-a1b9-3f8f05082e35", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-466444_31b92c35-a16d-4f8a-8b7f-63f77c533c51 became leader
	I0531 18:54:05.221919       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-466444_31b92c35-a16d-4f8a-8b7f-63f77c533c51!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-466444 -n ingress-addon-legacy-466444
E0531 18:57:23.083723   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-466444 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (183.60s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-jsm9c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-jsm9c -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-jsm9c -- sh -c "ping -c 1 192.168.58.1": exit status 1 (173.235534ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-jsm9c): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-rvdrs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-rvdrs -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-rvdrs -- sh -c "ping -c 1 192.168.58.1": exit status 1 (155.290942ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-rvdrs): exit status 1
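
Editor's note: both pods resolved host.minikube.internal but failed the ping with "permission denied (are you root?)". busybox's ping opens a raw ICMP socket, which requires CAP_NET_RAW, and recent CRI-O versions do not include NET_RAW in their default capability set; the host is most likely reachable and the failure is a capability problem inside the container. A hedged pod-spec fragment showing the usual fix (standard Kubernetes fields; the busybox deployment itself lives in minikube's testdata and is not shown here):

	# container-level securityContext fragment (sketch)
	securityContext:
	  capabilities:
	    add: ["NET_RAW"]
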
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-697136
helpers_test.go:235: (dbg) docker inspect multinode-697136:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe",
	        "Created": "2023-05-31T19:02:10.169096979Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 97995,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T19:02:10.439102473Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f246fffc476e503eec088cb85bddb7b217288054dd7e1375d4f95eca27f4bce3",
	        "ResolvConfPath": "/var/lib/docker/containers/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/hosts",
	        "LogPath": "/var/lib/docker/containers/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe-json.log",
	        "Name": "/multinode-697136",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-697136:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-697136",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6e2c46eb82eaa5cef380a79e82c7a3e2b8a117459051a93403f4dda2f46756c1-init/diff:/var/lib/docker/overlay2/ff5bbba96769eca5d0c1a4ffdb04787b9f84aae4dcd4bc9929a365a3d058b20f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e2c46eb82eaa5cef380a79e82c7a3e2b8a117459051a93403f4dda2f46756c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e2c46eb82eaa5cef380a79e82c7a3e2b8a117459051a93403f4dda2f46756c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e2c46eb82eaa5cef380a79e82c7a3e2b8a117459051a93403f4dda2f46756c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-697136",
	                "Source": "/var/lib/docker/volumes/multinode-697136/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-697136",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-697136",
	                "name.minikube.sigs.k8s.io": "multinode-697136",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aeaabe53e1330731a804e5dcf6cb0056f6cf44c7f3ad7eb1b59b9e9140fabf11",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/aeaabe53e133",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-697136": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "319c771e8fa6",
	                        "multinode-697136"
	                    ],
	                    "NetworkID": "e74e442a15ebe0499852f6737ebfb697e140195cd03889990597780d713d3524",
	                    "EndpointID": "932f7aa1dbe21f405fe4424a1e28b3ca777419412e1d4a1c299a969f9c803c4d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
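The inspect output confirms that 192.168.58.1, the address the pods failed to reach, is the host-side gateway of the "multinode-697136" bridge network (see NetworkSettings.Networks above), so the target itself was valid. A quick way to pull that field on its own, as a hypothetical follow-up command not run by the test:
	docker network inspect multinode-697136 --format '{{(index .IPAM.Config 0).Gateway}}'
	# expected output: 192.168.58.1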
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-697136 -n multinode-697136
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-697136 logs -n 25: (1.187759634s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-419339                           | mount-start-2-419339 | jenkins | v1.30.1 | 31 May 23 19:01 UTC | 31 May 23 19:01 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-419339 ssh -- ls                    | mount-start-2-419339 | jenkins | v1.30.1 | 31 May 23 19:01 UTC | 31 May 23 19:01 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-404412                           | mount-start-1-404412 | jenkins | v1.30.1 | 31 May 23 19:01 UTC | 31 May 23 19:01 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-419339 ssh -- ls                    | mount-start-2-419339 | jenkins | v1.30.1 | 31 May 23 19:01 UTC | 31 May 23 19:01 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-419339                           | mount-start-2-419339 | jenkins | v1.30.1 | 31 May 23 19:01 UTC | 31 May 23 19:01 UTC |
	| start   | -p mount-start-2-419339                           | mount-start-2-419339 | jenkins | v1.30.1 | 31 May 23 19:01 UTC | 31 May 23 19:02 UTC |
	| ssh     | mount-start-2-419339 ssh -- ls                    | mount-start-2-419339 | jenkins | v1.30.1 | 31 May 23 19:02 UTC | 31 May 23 19:02 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-419339                           | mount-start-2-419339 | jenkins | v1.30.1 | 31 May 23 19:02 UTC | 31 May 23 19:02 UTC |
	| delete  | -p mount-start-1-404412                           | mount-start-1-404412 | jenkins | v1.30.1 | 31 May 23 19:02 UTC | 31 May 23 19:02 UTC |
	| start   | -p multinode-697136                               | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:02 UTC | 31 May 23 19:03 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- apply -f                   | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- rollout                    | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- get pods -o                | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- get pods -o                | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | busybox-67b7f59bb-jsm9c --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | busybox-67b7f59bb-rvdrs --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | busybox-67b7f59bb-jsm9c --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | busybox-67b7f59bb-rvdrs --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | busybox-67b7f59bb-jsm9c -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | busybox-67b7f59bb-rvdrs -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- get pods -o                | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | busybox-67b7f59bb-jsm9c                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC |                     |
	|         | busybox-67b7f59bb-jsm9c -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC | 31 May 23 19:03 UTC |
	|         | busybox-67b7f59bb-rvdrs                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-697136 -- exec                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:03 UTC |                     |
	|         | busybox-67b7f59bb-rvdrs -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 19:02:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:02:04.327466   97386 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:02:04.327593   97386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:02:04.327602   97386 out.go:309] Setting ErrFile to fd 2...
	I0531 19:02:04.327609   97386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:02:04.327729   97386 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 19:02:04.328314   97386 out.go:303] Setting JSON to false
	I0531 19:02:04.329360   97386 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2673,"bootTime":1685557051,"procs":402,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:02:04.329458   97386 start.go:137] virtualization: kvm guest
	I0531 19:02:04.332101   97386 out.go:177] * [multinode-697136] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:02:04.333844   97386 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:02:04.333909   97386 notify.go:220] Checking for updates...
	I0531 19:02:04.335653   97386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:02:04.337449   97386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:02:04.339131   97386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 19:02:04.340701   97386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:02:04.342481   97386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:02:04.344584   97386 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:02:04.367548   97386 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:02:04.367654   97386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:02:04.412946   97386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-05-31 19:02:04.403976055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:02:04.413038   97386 docker.go:294] overlay module found
	I0531 19:02:04.416625   97386 out.go:177] * Using the docker driver based on user configuration
	I0531 19:02:04.418441   97386 start.go:297] selected driver: docker
	I0531 19:02:04.418453   97386 start.go:875] validating driver "docker" against <nil>
	I0531 19:02:04.418463   97386 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:02:04.419222   97386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:02:04.465982   97386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-05-31 19:02:04.457683529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:02:04.466125   97386 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 19:02:04.466311   97386 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:02:04.468456   97386 out.go:177] * Using Docker driver with root privileges
	I0531 19:02:04.470198   97386 cni.go:84] Creating CNI manager for ""
	I0531 19:02:04.470209   97386 cni.go:136] 0 nodes found, recommending kindnet
	I0531 19:02:04.470217   97386 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 19:02:04.470225   97386 start_flags.go:319] config:
	{Name:multinode-697136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-697136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:02:04.472981   97386 out.go:177] * Starting control plane node multinode-697136 in cluster multinode-697136
	I0531 19:02:04.474741   97386 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:02:04.476522   97386 out.go:177] * Pulling base image ...
	I0531 19:02:04.478254   97386 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:02:04.478286   97386 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4
	I0531 19:02:04.478293   97386 cache.go:57] Caching tarball of preloaded images
	I0531 19:02:04.478346   97386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:02:04.478365   97386 preload.go:174] Found /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 19:02:04.478373   97386 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 19:02:04.478648   97386 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/config.json ...
	I0531 19:02:04.478666   97386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/config.json: {Name:mk7912b7cbf41d0c47575f1438f22a8c1b613f84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:04.495263   97386 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:02:04.495292   97386 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 19:02:04.495320   97386 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:02:04.495349   97386 start.go:364] acquiring machines lock for multinode-697136: {Name:mk5202e3e4cb10a3eaeda28917b6009d33c066b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:02:04.495455   97386 start.go:368] acquired machines lock for "multinode-697136" in 86.161µs
	I0531 19:02:04.495479   97386 start.go:93] Provisioning new machine with config: &{Name:multinode-697136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-697136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:02:04.495559   97386 start.go:125] createHost starting for "" (driver="docker")
	I0531 19:02:04.497941   97386 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 19:02:04.498155   97386 start.go:159] libmachine.API.Create for "multinode-697136" (driver="docker")
	I0531 19:02:04.498178   97386 client.go:168] LocalClient.Create starting
	I0531 19:02:04.498249   97386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem
	I0531 19:02:04.498285   97386 main.go:141] libmachine: Decoding PEM data...
	I0531 19:02:04.498302   97386 main.go:141] libmachine: Parsing certificate...
	I0531 19:02:04.498371   97386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem
	I0531 19:02:04.498391   97386 main.go:141] libmachine: Decoding PEM data...
	I0531 19:02:04.498399   97386 main.go:141] libmachine: Parsing certificate...
	I0531 19:02:04.498690   97386 cli_runner.go:164] Run: docker network inspect multinode-697136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0531 19:02:04.513659   97386 cli_runner.go:211] docker network inspect multinode-697136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0531 19:02:04.513714   97386 network_create.go:281] running [docker network inspect multinode-697136] to gather additional debugging logs...
	I0531 19:02:04.513729   97386 cli_runner.go:164] Run: docker network inspect multinode-697136
	W0531 19:02:04.529006   97386 cli_runner.go:211] docker network inspect multinode-697136 returned with exit code 1
	I0531 19:02:04.529032   97386 network_create.go:284] error running [docker network inspect multinode-697136]: docker network inspect multinode-697136: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-697136 not found
	I0531 19:02:04.529042   97386 network_create.go:286] output of [docker network inspect multinode-697136]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-697136 not found
	
	** /stderr **
	I0531 19:02:04.529089   97386 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:02:04.544695   97386 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7ab169e4b338 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:97:96:ba:b9} reservation:<nil>}
	I0531 19:02:04.545182   97386 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010c58a0}
	I0531 19:02:04.545207   97386 network_create.go:123] attempt to create docker network multinode-697136 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0531 19:02:04.545256   97386 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-697136 multinode-697136
	I0531 19:02:04.598817   97386 network_create.go:107] docker network multinode-697136 192.168.58.0/24 created
	I0531 19:02:04.598844   97386 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-697136" container
	I0531 19:02:04.598895   97386 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:02:04.614225   97386 cli_runner.go:164] Run: docker volume create multinode-697136 --label name.minikube.sigs.k8s.io=multinode-697136 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:02:04.631242   97386 oci.go:103] Successfully created a docker volume multinode-697136
	I0531 19:02:04.631314   97386 cli_runner.go:164] Run: docker run --rm --name multinode-697136-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-697136 --entrypoint /usr/bin/test -v multinode-697136:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 19:02:05.132520   97386 oci.go:107] Successfully prepared a docker volume multinode-697136
	I0531 19:02:05.132555   97386 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:02:05.132576   97386 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 19:02:05.132639   97386 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-697136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:02:10.107722   97386 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-697136:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.975037477s)
	I0531 19:02:10.107755   97386 kic.go:199] duration metric: took 4.975176 seconds to extract preloaded images to volume
	W0531 19:02:10.107953   97386 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 19:02:10.108045   97386 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:02:10.154516   97386 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-697136 --name multinode-697136 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-697136 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-697136 --network multinode-697136 --ip 192.168.58.2 --volume multinode-697136:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 19:02:10.448172   97386 cli_runner.go:164] Run: docker container inspect multinode-697136 --format={{.State.Running}}
	I0531 19:02:10.465142   97386 cli_runner.go:164] Run: docker container inspect multinode-697136 --format={{.State.Status}}
	I0531 19:02:10.482888   97386 cli_runner.go:164] Run: docker exec multinode-697136 stat /var/lib/dpkg/alternatives/iptables
	I0531 19:02:10.553313   97386 oci.go:144] the created container "multinode-697136" has a running status.
	I0531 19:02:10.553348   97386 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa...
	I0531 19:02:10.808392   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0531 19:02:10.808462   97386 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 19:02:10.832057   97386 cli_runner.go:164] Run: docker container inspect multinode-697136 --format={{.State.Status}}
	I0531 19:02:10.850589   97386 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 19:02:10.850611   97386 kic_runner.go:114] Args: [docker exec --privileged multinode-697136 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 19:02:10.954174   97386 cli_runner.go:164] Run: docker container inspect multinode-697136 --format={{.State.Status}}
	I0531 19:02:10.973740   97386 machine.go:88] provisioning docker machine ...
	I0531 19:02:10.973775   97386 ubuntu.go:169] provisioning hostname "multinode-697136"
	I0531 19:02:10.973833   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:10.991544   97386 main.go:141] libmachine: Using SSH client type: native
	I0531 19:02:10.992129   97386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0531 19:02:10.992156   97386 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-697136 && echo "multinode-697136" | sudo tee /etc/hostname
	I0531 19:02:11.150964   97386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-697136
	
	I0531 19:02:11.151040   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:11.167397   97386 main.go:141] libmachine: Using SSH client type: native
	I0531 19:02:11.167840   97386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0531 19:02:11.167861   97386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-697136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-697136/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-697136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:02:11.280123   97386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:02:11.280151   97386 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-7270/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-7270/.minikube}
	I0531 19:02:11.280176   97386 ubuntu.go:177] setting up certificates
	I0531 19:02:11.280186   97386 provision.go:83] configureAuth start
	I0531 19:02:11.280231   97386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697136
	I0531 19:02:11.296240   97386 provision.go:138] copyHostCerts
	I0531 19:02:11.296273   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem
	I0531 19:02:11.296321   97386 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem, removing ...
	I0531 19:02:11.296335   97386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem
	I0531 19:02:11.296400   97386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem (1078 bytes)
	I0531 19:02:11.296471   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem
	I0531 19:02:11.296489   97386 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem, removing ...
	I0531 19:02:11.296495   97386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem
	I0531 19:02:11.296517   97386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem (1123 bytes)
	I0531 19:02:11.296558   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem
	I0531 19:02:11.296582   97386 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem, removing ...
	I0531 19:02:11.296588   97386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem
	I0531 19:02:11.296607   97386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem (1675 bytes)
	I0531 19:02:11.296662   97386 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem org=jenkins.multinode-697136 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-697136]
	I0531 19:02:11.389122   97386 provision.go:172] copyRemoteCerts
	I0531 19:02:11.389177   97386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:02:11.389215   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:11.405288   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa Username:docker}
	I0531 19:02:11.492446   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 19:02:11.492501   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:02:11.512754   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 19:02:11.512820   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0531 19:02:11.532948   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 19:02:11.533007   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 19:02:11.553088   97386 provision.go:86] duration metric: configureAuth took 272.888838ms
	I0531 19:02:11.553119   97386 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:02:11.553275   97386 config.go:182] Loaded profile config "multinode-697136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:02:11.553370   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:11.569201   97386 main.go:141] libmachine: Using SSH client type: native
	I0531 19:02:11.569754   97386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0531 19:02:11.569780   97386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:02:11.762212   97386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:02:11.762237   97386 machine.go:91] provisioned docker machine in 788.477081ms
	I0531 19:02:11.762246   97386 client.go:171] LocalClient.Create took 7.264063421s
	I0531 19:02:11.762261   97386 start.go:167] duration metric: libmachine.API.Create for "multinode-697136" took 7.264107758s
	I0531 19:02:11.762268   97386 start.go:300] post-start starting for "multinode-697136" (driver="docker")
	I0531 19:02:11.762275   97386 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:02:11.762336   97386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:02:11.762395   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:11.778250   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa Username:docker}
	I0531 19:02:11.861356   97386 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:02:11.864203   97386 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0531 19:02:11.864221   97386 command_runner.go:130] > NAME="Ubuntu"
	I0531 19:02:11.864228   97386 command_runner.go:130] > VERSION_ID="22.04"
	I0531 19:02:11.864235   97386 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0531 19:02:11.864242   97386 command_runner.go:130] > VERSION_CODENAME=jammy
	I0531 19:02:11.864247   97386 command_runner.go:130] > ID=ubuntu
	I0531 19:02:11.864253   97386 command_runner.go:130] > ID_LIKE=debian
	I0531 19:02:11.864259   97386 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0531 19:02:11.864267   97386 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0531 19:02:11.864281   97386 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0531 19:02:11.864310   97386 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0531 19:02:11.864320   97386 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0531 19:02:11.864391   97386 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:02:11.864425   97386 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:02:11.864444   97386 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:02:11.864456   97386 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 19:02:11.864470   97386 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/addons for local assets ...
	I0531 19:02:11.864530   97386 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/files for local assets ...
	I0531 19:02:11.864616   97386 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> 142322.pem in /etc/ssl/certs
	I0531 19:02:11.864627   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> /etc/ssl/certs/142322.pem
	I0531 19:02:11.864728   97386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:02:11.872107   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /etc/ssl/certs/142322.pem (1708 bytes)
	I0531 19:02:11.893066   97386 start.go:303] post-start completed in 130.784168ms
	I0531 19:02:11.893413   97386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697136
	I0531 19:02:11.909207   97386 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/config.json ...
	I0531 19:02:11.909480   97386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:02:11.909535   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:11.925535   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa Username:docker}
	I0531 19:02:12.004739   97386 command_runner.go:130] > 17%
	I0531 19:02:12.004818   97386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:02:12.008641   97386 command_runner.go:130] > 244G
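The two df probes above check that /var has headroom before continuing: percent used (17%) and gigabytes free (244G). A sketch of the same Use% check done locally in Go instead of via remote awk (diskUsePercent is illustrative only):

    package sketch

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // diskUsePercent returns the Use% column for path from "df -h",
    // e.g. "17%". The log does the same filtering remotely with
    // awk 'NR==2{print $5}'.
    func diskUsePercent(path string) (string, error) {
        out, err := exec.Command("df", "-h", path).Output()
        if err != nil {
            return "", err
        }
        lines := strings.Split(strings.TrimSpace(string(out)), "\n")
        if len(lines) < 2 {
            return "", fmt.Errorf("unexpected df output: %q", out)
        }
        return strings.Fields(lines[1])[4], nil // $5 = Use%
    }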
	I0531 19:02:12.008664   97386 start.go:128] duration metric: createHost completed in 7.513097766s
	I0531 19:02:12.008673   97386 start.go:83] releasing machines lock for "multinode-697136", held for 7.513206027s
	I0531 19:02:12.008723   97386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697136
	I0531 19:02:12.024739   97386 ssh_runner.go:195] Run: cat /version.json
	I0531 19:02:12.024789   97386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:02:12.024797   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:12.024856   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:12.040666   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa Username:docker}
	I0531 19:02:12.041595   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa Username:docker}
	I0531 19:02:12.205057   97386 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0531 19:02:12.205116   97386 command_runner.go:130] > {"iso_version": "v1.30.1-1684885329-16572", "kicbase_version": "v0.0.39-1685034446-16582", "minikube_version": "v1.30.1", "commit": "9bed7441264a4ae8022c57b970940d4a22d9373a"}
	I0531 19:02:12.205211   97386 ssh_runner.go:195] Run: systemctl --version
	I0531 19:02:12.209076   97386 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0531 19:02:12.209111   97386 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0531 19:02:12.209202   97386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:02:12.345001   97386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:02:12.348756   97386 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0531 19:02:12.348783   97386 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0531 19:02:12.348792   97386 command_runner.go:130] > Device: 33h/51d	Inode: 792944      Links: 1
	I0531 19:02:12.348800   97386 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:02:12.348808   97386 command_runner.go:130] > Access: 2023-04-04 14:31:21.000000000 +0000
	I0531 19:02:12.348815   97386 command_runner.go:130] > Modify: 2023-04-04 14:31:21.000000000 +0000
	I0531 19:02:12.348822   97386 command_runner.go:130] > Change: 2023-05-31 18:43:50.527806978 +0000
	I0531 19:02:12.348830   97386 command_runner.go:130] >  Birth: 2023-05-31 18:43:50.527806978 +0000
	I0531 19:02:12.348974   97386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:02:12.365975   97386 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:02:12.366067   97386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:02:12.391888   97386 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0531 19:02:12.391925   97386 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
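The bridge and podman CNI configs above are disabled by renaming them with an .mk_disabled suffix rather than deleting them, so a later start can restore them. A rough local equivalent of the find/-exec mv pipeline, assuming direct filesystem access (disableCNIConfigs is a made-up helper):

    package sketch

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfigs renames matching CNI configs to *.mk_disabled so
    // CRI-O ignores them, mirroring the remote find/-exec mv above.
    func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
        var disabled []string
        for _, pat := range patterns {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }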
	I0531 19:02:12.391932   97386 start.go:481] detecting cgroup driver to use...
	I0531 19:02:12.391965   97386 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:02:12.392001   97386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:02:12.404360   97386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:02:12.413647   97386 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:02:12.413690   97386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:02:12.425094   97386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:02:12.437307   97386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:02:12.506468   97386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:02:12.518732   97386 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0531 19:02:12.583320   97386 docker.go:209] disabling docker service ...
	I0531 19:02:12.583374   97386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:02:12.599852   97386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:02:12.609561   97386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:02:12.682643   97386 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0531 19:02:12.682706   97386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:02:12.767937   97386 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0531 19:02:12.768017   97386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
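cri-docker and docker are each taken out of the way with the same four-step systemctl sequence: stop the socket, stop the service, disable the socket, mask the service. A sketch of that sequence (maskService is not a real minikube function; stop and disable failures are deliberately ignored because the unit may be absent or already inactive):

    package sketch

    import "os/exec"

    // maskService reproduces the stop/stop/disable/mask sequence the log
    // runs for cri-docker and docker.
    func maskService(name string) error {
        _ = exec.Command("sudo", "systemctl", "stop", "-f", name+".socket").Run()
        _ = exec.Command("sudo", "systemctl", "stop", "-f", name+".service").Run()
        _ = exec.Command("sudo", "systemctl", "disable", name+".socket").Run()
        return exec.Command("sudo", "systemctl", "mask", name+".service").Run()
    }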
	I0531 19:02:12.777983   97386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:02:12.791720   97386 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
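crictl is pointed at CRI-O by writing a one-line /etc/crictl.yaml. The same write without the remote printf/tee hop might look like this (writeCrictlConfig is illustrative):

    package sketch

    import "os"

    // writeCrictlConfig points crictl at the CRI-O socket, equivalent to
    // the printf | sudo tee /etc/crictl.yaml pipeline above, minus sudo.
    func writeCrictlConfig() error {
        const conf = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
        return os.WriteFile("/etc/crictl.yaml", []byte(conf), 0o644)
    }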
	I0531 19:02:12.792538   97386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:02:12.792589   97386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:02:12.800949   97386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:02:12.801016   97386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:02:12.809181   97386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:02:12.817090   97386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:02:12.825294   97386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
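The CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf is edited in place with sed: set the pause image, set the cgroup driver, then delete and re-append conmon_cgroup so it always sits next to cgroup_manager. A sketch that reassembles those four commands (crioConfigCommands is a hypothetical helper):

    package sketch

    import "fmt"

    // crioConfigCommands rebuilds the four sed edits shown in the log.
    func crioConfigCommands(pauseImage, cgroupDriver string) []string {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
            `sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
        }
    }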
	I0531 19:02:12.833026   97386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:02:12.839430   97386 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0531 19:02:12.839993   97386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:02:12.847540   97386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:02:12.927760   97386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:02:13.032740   97386 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:02:13.032805   97386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:02:13.035956   97386 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0531 19:02:13.035978   97386 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0531 19:02:13.035991   97386 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0531 19:02:13.036000   97386 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:02:13.036009   97386 command_runner.go:130] > Access: 2023-05-31 19:02:13.016842667 +0000
	I0531 19:02:13.036018   97386 command_runner.go:130] > Modify: 2023-05-31 19:02:13.016842667 +0000
	I0531 19:02:13.036028   97386 command_runner.go:130] > Change: 2023-05-31 19:02:13.016842667 +0000
	I0531 19:02:13.036038   97386 command_runner.go:130] >  Birth: -
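The 60s socket wait above is a simple poll: stat the path until it appears or the deadline passes. A minimal sketch of that loop (waitForSocket is illustrative):

    package sketch

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses, like
    // the 60s wait for /var/run/crio/crio.sock above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }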
	I0531 19:02:13.036059   97386 start.go:549] Will wait 60s for crictl version
	I0531 19:02:13.036103   97386 ssh_runner.go:195] Run: which crictl
	I0531 19:02:13.038854   97386 command_runner.go:130] > /usr/bin/crictl
	I0531 19:02:13.038950   97386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:02:13.069998   97386 command_runner.go:130] > Version:  0.1.0
	I0531 19:02:13.070025   97386 command_runner.go:130] > RuntimeName:  cri-o
	I0531 19:02:13.070030   97386 command_runner.go:130] > RuntimeVersion:  1.24.5
	I0531 19:02:13.070035   97386 command_runner.go:130] > RuntimeApiVersion:  v1
	I0531 19:02:13.070049   97386 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 19:02:13.070108   97386 ssh_runner.go:195] Run: crio --version
	I0531 19:02:13.101658   97386 command_runner.go:130] > crio version 1.24.5
	I0531 19:02:13.101675   97386 command_runner.go:130] > Version:          1.24.5
	I0531 19:02:13.101683   97386 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0531 19:02:13.101687   97386 command_runner.go:130] > GitTreeState:     clean
	I0531 19:02:13.101692   97386 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0531 19:02:13.101696   97386 command_runner.go:130] > GoVersion:        go1.18.2
	I0531 19:02:13.101701   97386 command_runner.go:130] > Compiler:         gc
	I0531 19:02:13.101705   97386 command_runner.go:130] > Platform:         linux/amd64
	I0531 19:02:13.101710   97386 command_runner.go:130] > Linkmode:         dynamic
	I0531 19:02:13.101720   97386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0531 19:02:13.101729   97386 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:02:13.101736   97386 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:02:13.101797   97386 ssh_runner.go:195] Run: crio --version
	I0531 19:02:13.131521   97386 command_runner.go:130] > crio version 1.24.5
	I0531 19:02:13.131543   97386 command_runner.go:130] > Version:          1.24.5
	I0531 19:02:13.131553   97386 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0531 19:02:13.131559   97386 command_runner.go:130] > GitTreeState:     clean
	I0531 19:02:13.131567   97386 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0531 19:02:13.131575   97386 command_runner.go:130] > GoVersion:        go1.18.2
	I0531 19:02:13.131582   97386 command_runner.go:130] > Compiler:         gc
	I0531 19:02:13.131592   97386 command_runner.go:130] > Platform:         linux/amd64
	I0531 19:02:13.131602   97386 command_runner.go:130] > Linkmode:         dynamic
	I0531 19:02:13.131618   97386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0531 19:02:13.131629   97386 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:02:13.131639   97386 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:02:13.136481   97386 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0531 19:02:13.138188   97386 cli_runner.go:164] Run: docker network inspect multinode-697136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:02:13.153771   97386 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 19:02:13.157033   97386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
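host.minikube.internal is pinned by filtering any stale entry out of /etc/hosts and appending the current gateway IP; the real command stages the result in /tmp/h.$$ and copies it back with sudo cp. A direct-write sketch of the same rewrite (pinHostEntry is a made-up name):

    package sketch

    import (
        "os"
        "strings"
    )

    // pinHostEntry drops any old host.minikube.internal line and appends
    // the current gateway IP, mirroring the bash one-liner above.
    func pinHostEntry(hostsPath, ip string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\thost.minikube.internal")
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }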
	I0531 19:02:13.166475   97386 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:02:13.166528   97386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:02:13.210965   97386 command_runner.go:130] > {
	I0531 19:02:13.210990   97386 command_runner.go:130] >   "images": [
	I0531 19:02:13.210996   97386 command_runner.go:130] >     {
	I0531 19:02:13.211009   97386 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0531 19:02:13.211016   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.211028   97386 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0531 19:02:13.211033   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211038   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.211066   97386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0531 19:02:13.211085   97386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0531 19:02:13.211095   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211105   97386 command_runner.go:130] >       "size": "65249302",
	I0531 19:02:13.211114   97386 command_runner.go:130] >       "uid": null,
	I0531 19:02:13.211124   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.211130   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.211136   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.211140   97386 command_runner.go:130] >     },
	I0531 19:02:13.211146   97386 command_runner.go:130] >     {
	I0531 19:02:13.211152   97386 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0531 19:02:13.211158   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.211163   97386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0531 19:02:13.211170   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211173   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.211183   97386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0531 19:02:13.211192   97386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0531 19:02:13.211198   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211205   97386 command_runner.go:130] >       "size": "31470524",
	I0531 19:02:13.211211   97386 command_runner.go:130] >       "uid": null,
	I0531 19:02:13.211215   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.211221   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.211226   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.211231   97386 command_runner.go:130] >     },
	I0531 19:02:13.211235   97386 command_runner.go:130] >     {
	I0531 19:02:13.211245   97386 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0531 19:02:13.211252   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.211259   97386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0531 19:02:13.211265   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211270   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.211279   97386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0531 19:02:13.211288   97386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0531 19:02:13.211293   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211299   97386 command_runner.go:130] >       "size": "53621675",
	I0531 19:02:13.211303   97386 command_runner.go:130] >       "uid": null,
	I0531 19:02:13.211309   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.211313   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.211320   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.211323   97386 command_runner.go:130] >     },
	I0531 19:02:13.211329   97386 command_runner.go:130] >     {
	I0531 19:02:13.211335   97386 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0531 19:02:13.211342   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.211347   97386 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0531 19:02:13.211352   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211357   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.211365   97386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0531 19:02:13.211374   97386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0531 19:02:13.211383   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211389   97386 command_runner.go:130] >       "size": "297083935",
	I0531 19:02:13.211393   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.211407   97386 command_runner.go:130] >         "value": "0"
	I0531 19:02:13.211414   97386 command_runner.go:130] >       },
	I0531 19:02:13.211418   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.211425   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.211429   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.211435   97386 command_runner.go:130] >     },
	I0531 19:02:13.211438   97386 command_runner.go:130] >     {
	I0531 19:02:13.211447   97386 command_runner.go:130] >       "id": "c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370",
	I0531 19:02:13.211453   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.211458   97386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.2"
	I0531 19:02:13.211464   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211468   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.211475   97386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9",
	I0531 19:02:13.211484   97386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:95388fe585f1d6f65d414678042a281f80593e78cabaeeb8520a0873ebbb54f2"
	I0531 19:02:13.211490   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211494   97386 command_runner.go:130] >       "size": "122053574",
	I0531 19:02:13.211500   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.211504   97386 command_runner.go:130] >         "value": "0"
	I0531 19:02:13.211510   97386 command_runner.go:130] >       },
	I0531 19:02:13.211515   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.211521   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.211525   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.211531   97386 command_runner.go:130] >     },
	I0531 19:02:13.211538   97386 command_runner.go:130] >     {
	I0531 19:02:13.211544   97386 command_runner.go:130] >       "id": "ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12",
	I0531 19:02:13.211551   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.211556   97386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.2"
	I0531 19:02:13.211562   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211566   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.211575   97386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:279461bc1c0b4753dc83677a927b9f7827012b3adbcaa5df9dfd4af8b0987bc6",
	I0531 19:02:13.211586   97386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"
	I0531 19:02:13.211593   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211597   97386 command_runner.go:130] >       "size": "113906988",
	I0531 19:02:13.211603   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.211607   97386 command_runner.go:130] >         "value": "0"
	I0531 19:02:13.211613   97386 command_runner.go:130] >       },
	I0531 19:02:13.211617   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.211623   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.211627   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.211632   97386 command_runner.go:130] >     },
	I0531 19:02:13.211636   97386 command_runner.go:130] >     {
	I0531 19:02:13.211644   97386 command_runner.go:130] >       "id": "b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee",
	I0531 19:02:13.211648   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.211653   97386 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.2"
	I0531 19:02:13.211658   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211662   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.211672   97386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f",
	I0531 19:02:13.211680   97386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83"
	I0531 19:02:13.211686   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211690   97386 command_runner.go:130] >       "size": "72709527",
	I0531 19:02:13.211696   97386 command_runner.go:130] >       "uid": null,
	I0531 19:02:13.211700   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.211706   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.211710   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.211716   97386 command_runner.go:130] >     },
	I0531 19:02:13.211720   97386 command_runner.go:130] >     {
	I0531 19:02:13.211728   97386 command_runner.go:130] >       "id": "89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0",
	I0531 19:02:13.211734   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.211740   97386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.2"
	I0531 19:02:13.211745   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211750   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.211801   97386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177",
	I0531 19:02:13.211818   97386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f8be7505892d1671a15afa3ac6c3b31e50da87dd59a4745e30a5b3f9f584ee6e"
	I0531 19:02:13.211822   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211826   97386 command_runner.go:130] >       "size": "59802924",
	I0531 19:02:13.211830   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.211834   97386 command_runner.go:130] >         "value": "0"
	I0531 19:02:13.211840   97386 command_runner.go:130] >       },
	I0531 19:02:13.211847   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.211853   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.211862   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.211869   97386 command_runner.go:130] >     },
	I0531 19:02:13.211873   97386 command_runner.go:130] >     {
	I0531 19:02:13.211883   97386 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0531 19:02:13.211893   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.211901   97386 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0531 19:02:13.211910   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211917   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.211927   97386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0531 19:02:13.211936   97386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0531 19:02:13.211940   97386 command_runner.go:130] >       ],
	I0531 19:02:13.211946   97386 command_runner.go:130] >       "size": "750414",
	I0531 19:02:13.211950   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.211956   97386 command_runner.go:130] >         "value": "65535"
	I0531 19:02:13.211960   97386 command_runner.go:130] >       },
	I0531 19:02:13.211966   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.211971   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.211977   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.211986   97386 command_runner.go:130] >     }
	I0531 19:02:13.211995   97386 command_runner.go:130] >   ]
	I0531 19:02:13.212003   97386 command_runner.go:130] > }
	I0531 19:02:13.213200   97386 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:02:13.213218   97386 crio.go:415] Images already preloaded, skipping extraction
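The preload decision is made by parsing the "crictl images --output json" dump above and checking that every image required for Kubernetes v1.27.2 is present by tag. A sketch of that check (struct and function names here are mine, shaped after the JSON fields shown):

    package sketch

    import "encoding/json"

    // crictlImageList mirrors the JSON shape printed by crictl above.
    type crictlImageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // allPreloaded reports whether every wanted tag, e.g.
    // "registry.k8s.io/kube-apiserver:v1.27.2", is already present.
    func allPreloaded(raw []byte, wanted []string) (bool, error) {
        var list crictlImageList
        if err := json.Unmarshal(raw, &list); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, w := range wanted {
            if !have[w] {
                return false, nil
            }
        }
        return true, nil
    }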
	I0531 19:02:13.213288   97386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:02:13.242574   97386 command_runner.go:130] > {
	I0531 19:02:13.242600   97386 command_runner.go:130] >   "images": [
	I0531 19:02:13.242607   97386 command_runner.go:130] >     {
	I0531 19:02:13.242621   97386 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0531 19:02:13.242630   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.242637   97386 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0531 19:02:13.242641   97386 command_runner.go:130] >       ],
	I0531 19:02:13.242646   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.242663   97386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0531 19:02:13.242682   97386 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0531 19:02:13.242689   97386 command_runner.go:130] >       ],
	I0531 19:02:13.242698   97386 command_runner.go:130] >       "size": "65249302",
	I0531 19:02:13.242711   97386 command_runner.go:130] >       "uid": null,
	I0531 19:02:13.242722   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.242733   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.242741   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.242745   97386 command_runner.go:130] >     },
	I0531 19:02:13.242757   97386 command_runner.go:130] >     {
	I0531 19:02:13.242764   97386 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0531 19:02:13.242771   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.242776   97386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0531 19:02:13.242780   97386 command_runner.go:130] >       ],
	I0531 19:02:13.242784   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.242792   97386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0531 19:02:13.242799   97386 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0531 19:02:13.242803   97386 command_runner.go:130] >       ],
	I0531 19:02:13.242811   97386 command_runner.go:130] >       "size": "31470524",
	I0531 19:02:13.242816   97386 command_runner.go:130] >       "uid": null,
	I0531 19:02:13.242820   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.242828   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.242833   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.242836   97386 command_runner.go:130] >     },
	I0531 19:02:13.242845   97386 command_runner.go:130] >     {
	I0531 19:02:13.242851   97386 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0531 19:02:13.242858   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.242870   97386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0531 19:02:13.242877   97386 command_runner.go:130] >       ],
	I0531 19:02:13.242882   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.242892   97386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0531 19:02:13.242899   97386 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0531 19:02:13.242906   97386 command_runner.go:130] >       ],
	I0531 19:02:13.242910   97386 command_runner.go:130] >       "size": "53621675",
	I0531 19:02:13.242914   97386 command_runner.go:130] >       "uid": null,
	I0531 19:02:13.242918   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.242922   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.242928   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.242932   97386 command_runner.go:130] >     },
	I0531 19:02:13.242936   97386 command_runner.go:130] >     {
	I0531 19:02:13.242942   97386 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0531 19:02:13.242949   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.242956   97386 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0531 19:02:13.242963   97386 command_runner.go:130] >       ],
	I0531 19:02:13.242967   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.242978   97386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0531 19:02:13.242988   97386 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0531 19:02:13.243004   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243016   97386 command_runner.go:130] >       "size": "297083935",
	I0531 19:02:13.243023   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.243028   97386 command_runner.go:130] >         "value": "0"
	I0531 19:02:13.243032   97386 command_runner.go:130] >       },
	I0531 19:02:13.243037   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.243044   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.243048   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.243054   97386 command_runner.go:130] >     },
	I0531 19:02:13.243058   97386 command_runner.go:130] >     {
	I0531 19:02:13.243064   97386 command_runner.go:130] >       "id": "c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370",
	I0531 19:02:13.243070   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.243076   97386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.2"
	I0531 19:02:13.243082   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243086   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.243095   97386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9",
	I0531 19:02:13.243108   97386 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:95388fe585f1d6f65d414678042a281f80593e78cabaeeb8520a0873ebbb54f2"
	I0531 19:02:13.243115   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243119   97386 command_runner.go:130] >       "size": "122053574",
	I0531 19:02:13.243126   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.243130   97386 command_runner.go:130] >         "value": "0"
	I0531 19:02:13.243136   97386 command_runner.go:130] >       },
	I0531 19:02:13.243140   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.243147   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.243151   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.243176   97386 command_runner.go:130] >     },
	I0531 19:02:13.243182   97386 command_runner.go:130] >     {
	I0531 19:02:13.243188   97386 command_runner.go:130] >       "id": "ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12",
	I0531 19:02:13.243195   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.243200   97386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.2"
	I0531 19:02:13.243206   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243211   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.243222   97386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:279461bc1c0b4753dc83677a927b9f7827012b3adbcaa5df9dfd4af8b0987bc6",
	I0531 19:02:13.243233   97386 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"
	I0531 19:02:13.243238   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243246   97386 command_runner.go:130] >       "size": "113906988",
	I0531 19:02:13.243250   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.243262   97386 command_runner.go:130] >         "value": "0"
	I0531 19:02:13.243271   97386 command_runner.go:130] >       },
	I0531 19:02:13.243277   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.243287   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.243295   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.243304   97386 command_runner.go:130] >     },
	I0531 19:02:13.243310   97386 command_runner.go:130] >     {
	I0531 19:02:13.243322   97386 command_runner.go:130] >       "id": "b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee",
	I0531 19:02:13.243328   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.243338   97386 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.2"
	I0531 19:02:13.243342   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243347   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.243354   97386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f",
	I0531 19:02:13.243365   97386 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83"
	I0531 19:02:13.243369   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243373   97386 command_runner.go:130] >       "size": "72709527",
	I0531 19:02:13.243382   97386 command_runner.go:130] >       "uid": null,
	I0531 19:02:13.243386   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.243391   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.243396   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.243399   97386 command_runner.go:130] >     },
	I0531 19:02:13.243406   97386 command_runner.go:130] >     {
	I0531 19:02:13.243412   97386 command_runner.go:130] >       "id": "89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0",
	I0531 19:02:13.243419   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.243424   97386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.2"
	I0531 19:02:13.243429   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243433   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.243482   97386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177",
	I0531 19:02:13.243499   97386 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f8be7505892d1671a15afa3ac6c3b31e50da87dd59a4745e30a5b3f9f584ee6e"
	I0531 19:02:13.243506   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243514   97386 command_runner.go:130] >       "size": "59802924",
	I0531 19:02:13.243525   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.243533   97386 command_runner.go:130] >         "value": "0"
	I0531 19:02:13.243538   97386 command_runner.go:130] >       },
	I0531 19:02:13.243547   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.243552   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.243556   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.243563   97386 command_runner.go:130] >     },
	I0531 19:02:13.243570   97386 command_runner.go:130] >     {
	I0531 19:02:13.243588   97386 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0531 19:02:13.243600   97386 command_runner.go:130] >       "repoTags": [
	I0531 19:02:13.243609   97386 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0531 19:02:13.243619   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243625   97386 command_runner.go:130] >       "repoDigests": [
	I0531 19:02:13.243634   97386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0531 19:02:13.243641   97386 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0531 19:02:13.243675   97386 command_runner.go:130] >       ],
	I0531 19:02:13.243687   97386 command_runner.go:130] >       "size": "750414",
	I0531 19:02:13.243691   97386 command_runner.go:130] >       "uid": {
	I0531 19:02:13.243698   97386 command_runner.go:130] >         "value": "65535"
	I0531 19:02:13.243703   97386 command_runner.go:130] >       },
	I0531 19:02:13.243710   97386 command_runner.go:130] >       "username": "",
	I0531 19:02:13.243714   97386 command_runner.go:130] >       "spec": null,
	I0531 19:02:13.243721   97386 command_runner.go:130] >       "pinned": false
	I0531 19:02:13.243725   97386 command_runner.go:130] >     }
	I0531 19:02:13.243731   97386 command_runner.go:130] >   ]
	I0531 19:02:13.243735   97386 command_runner.go:130] > }
	I0531 19:02:13.244419   97386 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:02:13.244436   97386 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:02:13.244488   97386 ssh_runner.go:195] Run: crio config
	I0531 19:02:13.278421   97386 command_runner.go:130] ! time="2023-05-31 19:02:13.277993123Z" level=info msg="Starting CRI-O, version: 1.24.5, git: b007cb6753d97de6218787b6894b0e3cc1dc8ecd(clean)"
	I0531 19:02:13.278455   97386 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0531 19:02:13.283195   97386 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0531 19:02:13.283220   97386 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0531 19:02:13.283234   97386 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0531 19:02:13.283253   97386 command_runner.go:130] > #
	I0531 19:02:13.283273   97386 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0531 19:02:13.283289   97386 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0531 19:02:13.283304   97386 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0531 19:02:13.283317   97386 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0531 19:02:13.283325   97386 command_runner.go:130] > # reload'.
	I0531 19:02:13.283331   97386 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0531 19:02:13.283340   97386 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0531 19:02:13.283346   97386 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0531 19:02:13.283354   97386 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0531 19:02:13.283361   97386 command_runner.go:130] > [crio]
	I0531 19:02:13.283367   97386 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0531 19:02:13.283375   97386 command_runner.go:130] > # containers images, in this directory.
	I0531 19:02:13.283387   97386 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0531 19:02:13.283396   97386 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0531 19:02:13.283407   97386 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0531 19:02:13.283416   97386 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0531 19:02:13.283424   97386 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0531 19:02:13.283429   97386 command_runner.go:130] > # storage_driver = "vfs"
	I0531 19:02:13.283438   97386 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0531 19:02:13.283445   97386 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0531 19:02:13.283452   97386 command_runner.go:130] > # storage_option = [
	I0531 19:02:13.283456   97386 command_runner.go:130] > # ]
	I0531 19:02:13.283464   97386 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0531 19:02:13.283487   97386 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0531 19:02:13.283500   97386 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0531 19:02:13.283506   97386 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0531 19:02:13.283515   97386 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0531 19:02:13.283520   97386 command_runner.go:130] > # always happen on a node reboot
	I0531 19:02:13.283525   97386 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0531 19:02:13.283534   97386 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0531 19:02:13.283540   97386 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0531 19:02:13.283552   97386 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0531 19:02:13.283560   97386 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0531 19:02:13.283570   97386 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0531 19:02:13.283581   97386 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0531 19:02:13.283587   97386 command_runner.go:130] > # internal_wipe = true
	I0531 19:02:13.283598   97386 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0531 19:02:13.283607   97386 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0531 19:02:13.283615   97386 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0531 19:02:13.283621   97386 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0531 19:02:13.283629   97386 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0531 19:02:13.283636   97386 command_runner.go:130] > [crio.api]
	I0531 19:02:13.283641   97386 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0531 19:02:13.283648   97386 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0531 19:02:13.283654   97386 command_runner.go:130] > # IP address on which the stream server will listen.
	I0531 19:02:13.283664   97386 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0531 19:02:13.283674   97386 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0531 19:02:13.283680   97386 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0531 19:02:13.283686   97386 command_runner.go:130] > # stream_port = "0"
	I0531 19:02:13.283692   97386 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0531 19:02:13.283703   97386 command_runner.go:130] > # stream_enable_tls = false
	I0531 19:02:13.283713   97386 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0531 19:02:13.283720   97386 command_runner.go:130] > # stream_idle_timeout = ""
	I0531 19:02:13.283727   97386 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0531 19:02:13.283735   97386 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0531 19:02:13.283742   97386 command_runner.go:130] > # minutes.
	I0531 19:02:13.283747   97386 command_runner.go:130] > # stream_tls_cert = ""
	I0531 19:02:13.283755   97386 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0531 19:02:13.283765   97386 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0531 19:02:13.283772   97386 command_runner.go:130] > # stream_tls_key = ""
	I0531 19:02:13.283778   97386 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0531 19:02:13.283787   97386 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0531 19:02:13.283792   97386 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0531 19:02:13.283799   97386 command_runner.go:130] > # stream_tls_ca = ""
	I0531 19:02:13.283806   97386 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0531 19:02:13.283814   97386 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0531 19:02:13.283821   97386 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0531 19:02:13.283828   97386 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0531 19:02:13.283846   97386 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0531 19:02:13.283855   97386 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0531 19:02:13.283861   97386 command_runner.go:130] > [crio.runtime]
	I0531 19:02:13.283867   97386 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0531 19:02:13.283875   97386 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0531 19:02:13.283882   97386 command_runner.go:130] > # "nofile=1024:2048"
	I0531 19:02:13.283891   97386 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0531 19:02:13.283900   97386 command_runner.go:130] > # default_ulimits = [
	I0531 19:02:13.283906   97386 command_runner.go:130] > # ]
	I0531 19:02:13.283913   97386 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0531 19:02:13.283919   97386 command_runner.go:130] > # no_pivot = false
	I0531 19:02:13.283925   97386 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0531 19:02:13.283934   97386 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0531 19:02:13.283941   97386 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0531 19:02:13.283947   97386 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0531 19:02:13.283955   97386 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0531 19:02:13.283962   97386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:02:13.283968   97386 command_runner.go:130] > # conmon = ""
	I0531 19:02:13.283973   97386 command_runner.go:130] > # Cgroup setting for conmon
	I0531 19:02:13.283981   97386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0531 19:02:13.283986   97386 command_runner.go:130] > conmon_cgroup = "pod"
	I0531 19:02:13.284010   97386 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0531 19:02:13.284021   97386 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0531 19:02:13.284031   97386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:02:13.284038   97386 command_runner.go:130] > # conmon_env = [
	I0531 19:02:13.284042   97386 command_runner.go:130] > # ]
	I0531 19:02:13.284050   97386 command_runner.go:130] > # Additional environment variables to set for all the
	I0531 19:02:13.284058   97386 command_runner.go:130] > # containers. These are overridden if set in the
	I0531 19:02:13.284064   97386 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0531 19:02:13.284071   97386 command_runner.go:130] > # default_env = [
	I0531 19:02:13.284074   97386 command_runner.go:130] > # ]
	I0531 19:02:13.284082   97386 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0531 19:02:13.284086   97386 command_runner.go:130] > # selinux = false
	I0531 19:02:13.284097   97386 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0531 19:02:13.284106   97386 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0531 19:02:13.284114   97386 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0531 19:02:13.284121   97386 command_runner.go:130] > # seccomp_profile = ""
	I0531 19:02:13.284127   97386 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0531 19:02:13.284135   97386 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0531 19:02:13.284144   97386 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0531 19:02:13.284148   97386 command_runner.go:130] > # which might increase security.
	I0531 19:02:13.284156   97386 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0531 19:02:13.284169   97386 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0531 19:02:13.284178   97386 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0531 19:02:13.284187   97386 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0531 19:02:13.284200   97386 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0531 19:02:13.284208   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:02:13.284216   97386 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0531 19:02:13.284222   97386 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0531 19:02:13.284229   97386 command_runner.go:130] > # the cgroup blockio controller.
	I0531 19:02:13.284233   97386 command_runner.go:130] > # blockio_config_file = ""
	I0531 19:02:13.284242   97386 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0531 19:02:13.284249   97386 command_runner.go:130] > # irqbalance daemon.
	I0531 19:02:13.284255   97386 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0531 19:02:13.284265   97386 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0531 19:02:13.284273   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:02:13.284280   97386 command_runner.go:130] > # rdt_config_file = ""
	I0531 19:02:13.284286   97386 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0531 19:02:13.284313   97386 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0531 19:02:13.284326   97386 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0531 19:02:13.284335   97386 command_runner.go:130] > # separate_pull_cgroup = ""
	I0531 19:02:13.284341   97386 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0531 19:02:13.284350   97386 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0531 19:02:13.284356   97386 command_runner.go:130] > # will be added.
	I0531 19:02:13.284361   97386 command_runner.go:130] > # default_capabilities = [
	I0531 19:02:13.284367   97386 command_runner.go:130] > # 	"CHOWN",
	I0531 19:02:13.284371   97386 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0531 19:02:13.284378   97386 command_runner.go:130] > # 	"FSETID",
	I0531 19:02:13.284382   97386 command_runner.go:130] > # 	"FOWNER",
	I0531 19:02:13.284388   97386 command_runner.go:130] > # 	"SETGID",
	I0531 19:02:13.284392   97386 command_runner.go:130] > # 	"SETUID",
	I0531 19:02:13.284398   97386 command_runner.go:130] > # 	"SETPCAP",
	I0531 19:02:13.284402   97386 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0531 19:02:13.284406   97386 command_runner.go:130] > # 	"KILL",
	I0531 19:02:13.284412   97386 command_runner.go:130] > # ]
	I0531 19:02:13.284420   97386 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0531 19:02:13.284429   97386 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0531 19:02:13.284437   97386 command_runner.go:130] > # add_inheritable_capabilities = true
	I0531 19:02:13.284447   97386 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0531 19:02:13.284455   97386 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:02:13.284462   97386 command_runner.go:130] > # default_sysctls = [
	I0531 19:02:13.284466   97386 command_runner.go:130] > # ]
	I0531 19:02:13.284473   97386 command_runner.go:130] > # List of devices on the host that a
	I0531 19:02:13.284479   97386 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0531 19:02:13.284483   97386 command_runner.go:130] > # allowed_devices = [
	I0531 19:02:13.284490   97386 command_runner.go:130] > # 	"/dev/fuse",
	I0531 19:02:13.284494   97386 command_runner.go:130] > # ]
	I0531 19:02:13.284504   97386 command_runner.go:130] > # List of additional devices, specified as
	I0531 19:02:13.284527   97386 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0531 19:02:13.284536   97386 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0531 19:02:13.284544   97386 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:02:13.284552   97386 command_runner.go:130] > # additional_devices = [
	I0531 19:02:13.284555   97386 command_runner.go:130] > # ]
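	For reference, a sketch of passing one device to every container using the "<device-on-host>:<device-on-container>:<permissions>" format documented above (the /dev/fuse mapping is illustrative):
	additional_devices = [
	    "/dev/fuse:/dev/fuse:rwm",
	]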
	I0531 19:02:13.284562   97386 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0531 19:02:13.284567   97386 command_runner.go:130] > # cdi_spec_dirs = [
	I0531 19:02:13.284575   97386 command_runner.go:130] > # 	"/etc/cdi",
	I0531 19:02:13.284580   97386 command_runner.go:130] > # 	"/var/run/cdi",
	I0531 19:02:13.284586   97386 command_runner.go:130] > # ]
	I0531 19:02:13.284593   97386 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0531 19:02:13.284601   97386 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0531 19:02:13.284608   97386 command_runner.go:130] > # Defaults to false.
	I0531 19:02:13.284613   97386 command_runner.go:130] > # device_ownership_from_security_context = false
	I0531 19:02:13.284623   97386 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0531 19:02:13.284632   97386 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0531 19:02:13.284639   97386 command_runner.go:130] > # hooks_dir = [
	I0531 19:02:13.284644   97386 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0531 19:02:13.284650   97386 command_runner.go:130] > # ]
	I0531 19:02:13.284656   97386 command_runner.go:130] > # Path to the file specifying the default mounts for each container. The
	I0531 19:02:13.284664   97386 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0531 19:02:13.284673   97386 command_runner.go:130] > # its default mounts from the following two files:
	I0531 19:02:13.284680   97386 command_runner.go:130] > #
	I0531 19:02:13.284686   97386 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0531 19:02:13.284695   97386 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0531 19:02:13.284703   97386 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0531 19:02:13.284709   97386 command_runner.go:130] > #
	I0531 19:02:13.284715   97386 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0531 19:02:13.284724   97386 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0531 19:02:13.284733   97386 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0531 19:02:13.284740   97386 command_runner.go:130] > #      only add mounts it finds in this file.
	I0531 19:02:13.284744   97386 command_runner.go:130] > #
	I0531 19:02:13.284751   97386 command_runner.go:130] > # default_mounts_file = ""
	I0531 19:02:13.284756   97386 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0531 19:02:13.284765   97386 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0531 19:02:13.284772   97386 command_runner.go:130] > # pids_limit = 0
	I0531 19:02:13.284778   97386 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0531 19:02:13.284786   97386 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0531 19:02:13.284797   97386 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0531 19:02:13.284815   97386 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0531 19:02:13.284823   97386 command_runner.go:130] > # log_size_max = -1
	I0531 19:02:13.284830   97386 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I0531 19:02:13.284837   97386 command_runner.go:130] > # log_to_journald = false
	I0531 19:02:13.284844   97386 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0531 19:02:13.284851   97386 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0531 19:02:13.284858   97386 command_runner.go:130] > # Path to directory for container attach sockets.
	I0531 19:02:13.284866   97386 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0531 19:02:13.284875   97386 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0531 19:02:13.284881   97386 command_runner.go:130] > # bind_mount_prefix = ""
	I0531 19:02:13.284887   97386 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0531 19:02:13.284894   97386 command_runner.go:130] > # read_only = false
	I0531 19:02:13.284900   97386 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0531 19:02:13.284908   97386 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0531 19:02:13.284912   97386 command_runner.go:130] > # live configuration reload.
	I0531 19:02:13.284919   97386 command_runner.go:130] > # log_level = "info"
	I0531 19:02:13.284925   97386 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0531 19:02:13.284932   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:02:13.284940   97386 command_runner.go:130] > # log_filter = ""
	I0531 19:02:13.284946   97386 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0531 19:02:13.284954   97386 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0531 19:02:13.284960   97386 command_runner.go:130] > # separated by comma.
	I0531 19:02:13.284965   97386 command_runner.go:130] > # uid_mappings = ""
	I0531 19:02:13.284974   97386 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0531 19:02:13.284980   97386 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0531 19:02:13.284988   97386 command_runner.go:130] > # separated by comma.
	I0531 19:02:13.284995   97386 command_runner.go:130] > # gid_mappings = ""
	I0531 19:02:13.285002   97386 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0531 19:02:13.285010   97386 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:02:13.285019   97386 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:02:13.285025   97386 command_runner.go:130] > # minimum_mappable_uid = -1
	I0531 19:02:13.285032   97386 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0531 19:02:13.285040   97386 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:02:13.285049   97386 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:02:13.285055   97386 command_runner.go:130] > # minimum_mappable_gid = -1
	I0531 19:02:13.285061   97386 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0531 19:02:13.285070   97386 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0531 19:02:13.285078   97386 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0531 19:02:13.285085   97386 command_runner.go:130] > # ctr_stop_timeout = 30
	I0531 19:02:13.285091   97386 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0531 19:02:13.285103   97386 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0531 19:02:13.285111   97386 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0531 19:02:13.285119   97386 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0531 19:02:13.285127   97386 command_runner.go:130] > # drop_infra_ctr = true
	I0531 19:02:13.285133   97386 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0531 19:02:13.285141   97386 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0531 19:02:13.285150   97386 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0531 19:02:13.285155   97386 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0531 19:02:13.285169   97386 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0531 19:02:13.285177   97386 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0531 19:02:13.285184   97386 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0531 19:02:13.285191   97386 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0531 19:02:13.285198   97386 command_runner.go:130] > # pinns_path = ""
	I0531 19:02:13.285204   97386 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0531 19:02:13.285213   97386 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0531 19:02:13.285221   97386 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0531 19:02:13.285228   97386 command_runner.go:130] > # default_runtime = "runc"
	I0531 19:02:13.285233   97386 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0531 19:02:13.285243   97386 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0531 19:02:13.285255   97386 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0531 19:02:13.285263   97386 command_runner.go:130] > # creation as a file is not desired either.
	I0531 19:02:13.285274   97386 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0531 19:02:13.285281   97386 command_runner.go:130] > # the hostname is being managed dynamically.
	I0531 19:02:13.285286   97386 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0531 19:02:13.285292   97386 command_runner.go:130] > # ]
	I0531 19:02:13.285298   97386 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0531 19:02:13.285307   97386 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0531 19:02:13.285316   97386 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0531 19:02:13.285322   97386 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0531 19:02:13.285328   97386 command_runner.go:130] > #
	I0531 19:02:13.285332   97386 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0531 19:02:13.285341   97386 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0531 19:02:13.285349   97386 command_runner.go:130] > #  runtime_type = "oci"
	I0531 19:02:13.285357   97386 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0531 19:02:13.285365   97386 command_runner.go:130] > #  privileged_without_host_devices = false
	I0531 19:02:13.285369   97386 command_runner.go:130] > #  allowed_annotations = []
	I0531 19:02:13.285376   97386 command_runner.go:130] > # Where:
	I0531 19:02:13.285382   97386 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0531 19:02:13.285392   97386 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0531 19:02:13.285401   97386 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0531 19:02:13.285407   97386 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0531 19:02:13.285413   97386 command_runner.go:130] > #   in $PATH.
	I0531 19:02:13.285419   97386 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0531 19:02:13.285427   97386 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0531 19:02:13.285433   97386 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0531 19:02:13.285439   97386 command_runner.go:130] > #   state.
	I0531 19:02:13.285446   97386 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0531 19:02:13.285454   97386 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0531 19:02:13.285462   97386 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0531 19:02:13.285470   97386 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0531 19:02:13.285476   97386 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0531 19:02:13.285485   97386 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0531 19:02:13.285490   97386 command_runner.go:130] > #   The currently recognized values are:
	I0531 19:02:13.285499   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0531 19:02:13.285509   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0531 19:02:13.285517   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0531 19:02:13.285527   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0531 19:02:13.285537   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0531 19:02:13.285546   97386 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0531 19:02:13.285555   97386 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0531 19:02:13.285565   97386 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0531 19:02:13.285573   97386 command_runner.go:130] > #   should be moved to the container's cgroup
	I0531 19:02:13.285578   97386 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0531 19:02:13.285585   97386 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0531 19:02:13.285589   97386 command_runner.go:130] > runtime_type = "oci"
	I0531 19:02:13.285596   97386 command_runner.go:130] > runtime_root = "/run/runc"
	I0531 19:02:13.285600   97386 command_runner.go:130] > runtime_config_path = ""
	I0531 19:02:13.285607   97386 command_runner.go:130] > monitor_path = ""
	I0531 19:02:13.285611   97386 command_runner.go:130] > monitor_cgroup = ""
	I0531 19:02:13.285618   97386 command_runner.go:130] > monitor_exec_cgroup = ""
	I0531 19:02:13.285645   97386 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0531 19:02:13.285655   97386 command_runner.go:130] > # running containers
	I0531 19:02:13.285662   97386 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0531 19:02:13.285669   97386 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0531 19:02:13.285679   97386 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0531 19:02:13.285687   97386 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0531 19:02:13.285695   97386 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0531 19:02:13.285702   97386 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0531 19:02:13.285707   97386 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0531 19:02:13.285714   97386 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0531 19:02:13.285719   97386 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0531 19:02:13.285727   97386 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
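	As a concrete instance of the runtime-handler entry format described above, a sketch that registers the commented-out crun handler (the binary path and runtime root are assumptions about the host, not values from this run):
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"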
	I0531 19:02:13.285737   97386 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0531 19:02:13.285745   97386 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0531 19:02:13.285752   97386 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0531 19:02:13.285761   97386 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0531 19:02:13.285771   97386 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0531 19:02:13.285779   97386 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0531 19:02:13.285790   97386 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0531 19:02:13.285800   97386 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0531 19:02:13.285807   97386 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0531 19:02:13.285816   97386 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0531 19:02:13.285822   97386 command_runner.go:130] > # Example:
	I0531 19:02:13.285827   97386 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0531 19:02:13.285835   97386 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0531 19:02:13.285840   97386 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0531 19:02:13.285847   97386 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0531 19:02:13.285854   97386 command_runner.go:130] > # cpuset = "0-1"
	I0531 19:02:13.285859   97386 command_runner.go:130] > # cpushares = 0
	I0531 19:02:13.285865   97386 command_runner.go:130] > # Where:
	I0531 19:02:13.285869   97386 command_runner.go:130] > # The workload name is workload-type.
	I0531 19:02:13.285879   97386 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0531 19:02:13.285886   97386 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0531 19:02:13.285896   97386 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0531 19:02:13.285907   97386 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0531 19:02:13.285915   97386 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0531 19:02:13.285922   97386 command_runner.go:130] > # 
	I0531 19:02:13.285929   97386 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0531 19:02:13.285935   97386 command_runner.go:130] > #
	I0531 19:02:13.285942   97386 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0531 19:02:13.285951   97386 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0531 19:02:13.285959   97386 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0531 19:02:13.285967   97386 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0531 19:02:13.285973   97386 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0531 19:02:13.285979   97386 command_runner.go:130] > [crio.image]
	I0531 19:02:13.285985   97386 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0531 19:02:13.285992   97386 command_runner.go:130] > # default_transport = "docker://"
	I0531 19:02:13.285998   97386 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0531 19:02:13.286007   97386 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:02:13.286013   97386 command_runner.go:130] > # global_auth_file = ""
	I0531 19:02:13.286019   97386 command_runner.go:130] > # The image used to instantiate infra containers.
	I0531 19:02:13.286026   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:02:13.286034   97386 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0531 19:02:13.286040   97386 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0531 19:02:13.286048   97386 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:02:13.286056   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:02:13.286062   97386 command_runner.go:130] > # pause_image_auth_file = ""
	I0531 19:02:13.286068   97386 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0531 19:02:13.286078   97386 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0531 19:02:13.286085   97386 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0531 19:02:13.286094   97386 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0531 19:02:13.286101   97386 command_runner.go:130] > # pause_command = "/pause"
	I0531 19:02:13.286107   97386 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0531 19:02:13.286115   97386 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0531 19:02:13.286124   97386 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0531 19:02:13.286130   97386 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0531 19:02:13.286137   97386 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0531 19:02:13.286142   97386 command_runner.go:130] > # signature_policy = ""
	I0531 19:02:13.286153   97386 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0531 19:02:13.286165   97386 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0531 19:02:13.286172   97386 command_runner.go:130] > # changing them here.
	I0531 19:02:13.286180   97386 command_runner.go:130] > # insecure_registries = [
	I0531 19:02:13.286186   97386 command_runner.go:130] > # ]
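	If TLS verification really must be skipped for a private registry (the comments above recommend configuring it in /etc/containers/registries.conf instead), the override would look like this sketch, with a hypothetical registry host:
	insecure_registries = [
	    "registry.local:5000",
	]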
	I0531 19:02:13.286192   97386 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0531 19:02:13.286201   97386 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0531 19:02:13.286208   97386 command_runner.go:130] > # image_volumes = "mkdir"
	I0531 19:02:13.286215   97386 command_runner.go:130] > # Temporary directory to use for storing big files
	I0531 19:02:13.286222   97386 command_runner.go:130] > # big_files_temporary_dir = ""
	I0531 19:02:13.286228   97386 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0531 19:02:13.286234   97386 command_runner.go:130] > # CNI plugins.
	I0531 19:02:13.286239   97386 command_runner.go:130] > [crio.network]
	I0531 19:02:13.286247   97386 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0531 19:02:13.286252   97386 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0531 19:02:13.286261   97386 command_runner.go:130] > # cni_default_network = ""
	I0531 19:02:13.286270   97386 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0531 19:02:13.286278   97386 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0531 19:02:13.286283   97386 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0531 19:02:13.286290   97386 command_runner.go:130] > # plugin_dirs = [
	I0531 19:02:13.286294   97386 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0531 19:02:13.286300   97386 command_runner.go:130] > # ]
	I0531 19:02:13.286306   97386 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0531 19:02:13.286312   97386 command_runner.go:130] > [crio.metrics]
	I0531 19:02:13.286318   97386 command_runner.go:130] > # Globally enable or disable metrics support.
	I0531 19:02:13.286325   97386 command_runner.go:130] > # enable_metrics = false
	I0531 19:02:13.286333   97386 command_runner.go:130] > # Specify enabled metrics collectors.
	I0531 19:02:13.286338   97386 command_runner.go:130] > # By default, all metrics are enabled.
	I0531 19:02:13.286345   97386 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0531 19:02:13.286353   97386 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0531 19:02:13.286361   97386 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0531 19:02:13.286368   97386 command_runner.go:130] > # metrics_collectors = [
	I0531 19:02:13.286372   97386 command_runner.go:130] > # 	"operations",
	I0531 19:02:13.286380   97386 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0531 19:02:13.286384   97386 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0531 19:02:13.286391   97386 command_runner.go:130] > # 	"operations_errors",
	I0531 19:02:13.286395   97386 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0531 19:02:13.286402   97386 command_runner.go:130] > # 	"image_pulls_by_name",
	I0531 19:02:13.286407   97386 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0531 19:02:13.286413   97386 command_runner.go:130] > # 	"image_pulls_failures",
	I0531 19:02:13.286418   97386 command_runner.go:130] > # 	"image_pulls_successes",
	I0531 19:02:13.286424   97386 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0531 19:02:13.286430   97386 command_runner.go:130] > # 	"image_layer_reuse",
	I0531 19:02:13.286438   97386 command_runner.go:130] > # 	"containers_oom_total",
	I0531 19:02:13.286445   97386 command_runner.go:130] > # 	"containers_oom",
	I0531 19:02:13.286450   97386 command_runner.go:130] > # 	"processes_defunct",
	I0531 19:02:13.286456   97386 command_runner.go:130] > # 	"operations_total",
	I0531 19:02:13.286461   97386 command_runner.go:130] > # 	"operations_latency_seconds",
	I0531 19:02:13.286471   97386 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0531 19:02:13.286478   97386 command_runner.go:130] > # 	"operations_errors_total",
	I0531 19:02:13.286483   97386 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0531 19:02:13.286490   97386 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0531 19:02:13.286495   97386 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0531 19:02:13.286501   97386 command_runner.go:130] > # 	"image_pulls_success_total",
	I0531 19:02:13.286506   97386 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0531 19:02:13.286513   97386 command_runner.go:130] > # 	"containers_oom_count_total",
	I0531 19:02:13.286517   97386 command_runner.go:130] > # ]
	I0531 19:02:13.286525   97386 command_runner.go:130] > # The port on which the metrics server will listen.
	I0531 19:02:13.286529   97386 command_runner.go:130] > # metrics_port = 9090
	I0531 19:02:13.286536   97386 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0531 19:02:13.286542   97386 command_runner.go:130] > # metrics_socket = ""
	I0531 19:02:13.286547   97386 command_runner.go:130] > # The certificate for the secure metrics server.
	I0531 19:02:13.286556   97386 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0531 19:02:13.286565   97386 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0531 19:02:13.286572   97386 command_runner.go:130] > # certificate on any modification event.
	I0531 19:02:13.286576   97386 command_runner.go:130] > # metrics_cert = ""
	I0531 19:02:13.286584   97386 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0531 19:02:13.286589   97386 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0531 19:02:13.286595   97386 command_runner.go:130] > # metrics_key = ""
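	Putting the metrics options together, a sketch that turns the endpoint on with only two of the collectors listed above (the selection is illustrative):
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
	    "operations",
	    "image_pulls_failures",
	]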
	I0531 19:02:13.286601   97386 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0531 19:02:13.286608   97386 command_runner.go:130] > [crio.tracing]
	I0531 19:02:13.286614   97386 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0531 19:02:13.286622   97386 command_runner.go:130] > # enable_tracing = false
	I0531 19:02:13.286630   97386 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0531 19:02:13.286635   97386 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0531 19:02:13.286641   97386 command_runner.go:130] > # Number of samples to collect per million spans.
	I0531 19:02:13.286648   97386 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0531 19:02:13.286654   97386 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0531 19:02:13.286663   97386 command_runner.go:130] > [crio.stats]
	I0531 19:02:13.286669   97386 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0531 19:02:13.286678   97386 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0531 19:02:13.286682   97386 command_runner.go:130] > # stats_collection_period = 0
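	Since nearly everything above is a commented default, a minimal sketch of an override file, assuming this CRI-O build reads drop-ins from /etc/crio/crio.conf.d/ (the directory and file name are assumptions, not values from this run):
	# /etc/crio/crio.conf.d/10-log-level.conf
	[crio.runtime]
	log_level = "debug"  # supports live configuration reload, per the comment above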
	I0531 19:02:13.286749   97386 cni.go:84] Creating CNI manager for ""
	I0531 19:02:13.286762   97386 cni.go:136] 1 nodes found, recommending kindnet
	I0531 19:02:13.286771   97386 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:02:13.286790   97386 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-697136 NodeName:multinode-697136 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:02:13.286909   97386 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-697136"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:02:13.286976   97386 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-697136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-697136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 19:02:13.287024   97386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0531 19:02:13.294811   97386 command_runner.go:130] > kubeadm
	I0531 19:02:13.294830   97386 command_runner.go:130] > kubectl
	I0531 19:02:13.294835   97386 command_runner.go:130] > kubelet
	I0531 19:02:13.294851   97386 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:02:13.294899   97386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:02:13.302008   97386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0531 19:02:13.316684   97386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:02:13.331926   97386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0531 19:02:13.346674   97386 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:02:13.349609   97386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:02:13.359150   97386 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136 for IP: 192.168.58.2
	I0531 19:02:13.359186   97386 certs.go:190] acquiring lock for shared ca certs: {Name:mkbc42e9eaddef0752bd9f3cb948d1ed478bdf0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:13.359359   97386 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key
	I0531 19:02:13.359419   97386 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key
	I0531 19:02:13.359466   97386 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.key
	I0531 19:02:13.359485   97386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.crt with IP's: []
	I0531 19:02:13.537814   97386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.crt ...
	I0531 19:02:13.537845   97386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.crt: {Name:mk2587f30da3b7cb51c2b5b47baee978a29336f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:13.538042   97386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.key ...
	I0531 19:02:13.538057   97386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.key: {Name:mk594841d3fd9b1aa9e4ccf533530d175cf9e034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:13.538155   97386 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.key.cee25041
	I0531 19:02:13.538175   97386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0531 19:02:13.633042   97386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.crt.cee25041 ...
	I0531 19:02:13.633076   97386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.crt.cee25041: {Name:mk13ab7f6d09614be2424283efedd1f51aca693b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:13.633241   97386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.key.cee25041 ...
	I0531 19:02:13.633252   97386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.key.cee25041: {Name:mk8fb5532729b7b9ff8cc4947a1bbcd6d4dab58a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:13.633320   97386 certs.go:337] copying /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.crt
	I0531 19:02:13.633392   97386 certs.go:341] copying /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.key
	I0531 19:02:13.633440   97386 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.key
	I0531 19:02:13.633453   97386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.crt with IP's: []
	I0531 19:02:13.822924   97386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.crt ...
	I0531 19:02:13.822956   97386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.crt: {Name:mkb2abe0388504920c8a4cea4df716eeb073cff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:13.823123   97386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.key ...
	I0531 19:02:13.823134   97386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.key: {Name:mk0390ecd6324bea98cd79c21c275ca8dbdef986 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:13.823192   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 19:02:13.823209   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 19:02:13.823219   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 19:02:13.823232   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 19:02:13.823248   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 19:02:13.823260   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 19:02:13.823275   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 19:02:13.823289   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 19:02:13.823344   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem (1338 bytes)
	W0531 19:02:13.823378   97386 certs.go:433] ignoring /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232_empty.pem, impossibly tiny 0 bytes
	I0531 19:02:13.823391   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem (1679 bytes)
	I0531 19:02:13.823414   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem (1078 bytes)
	I0531 19:02:13.823437   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:02:13.823463   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem (1675 bytes)
	I0531 19:02:13.823500   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem (1708 bytes)
	I0531 19:02:13.823526   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem -> /usr/share/ca-certificates/14232.pem
	I0531 19:02:13.823539   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> /usr/share/ca-certificates/142322.pem
	I0531 19:02:13.823554   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:02:13.824095   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:02:13.845487   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 19:02:13.866208   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:02:13.887220   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 19:02:13.908080   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:02:13.929380   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:02:13.950144   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:02:13.970580   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:02:13.990961   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem --> /usr/share/ca-certificates/14232.pem (1338 bytes)
	I0531 19:02:14.010996   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /usr/share/ca-certificates/142322.pem (1708 bytes)
	I0531 19:02:14.031020   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:02:14.051265   97386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:02:14.066240   97386 ssh_runner.go:195] Run: openssl version
	I0531 19:02:14.070794   97386 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0531 19:02:14.070912   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142322.pem && ln -fs /usr/share/ca-certificates/142322.pem /etc/ssl/certs/142322.pem"
	I0531 19:02:14.079052   97386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142322.pem
	I0531 19:02:14.082199   97386 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 31 18:49 /usr/share/ca-certificates/142322.pem
	I0531 19:02:14.082238   97386 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 31 18:49 /usr/share/ca-certificates/142322.pem
	I0531 19:02:14.082277   97386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142322.pem
	I0531 19:02:14.088272   97386 command_runner.go:130] > 3ec20f2e
	I0531 19:02:14.088348   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142322.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:02:14.096437   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:02:14.104454   97386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:02:14.107441   97386 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 31 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:02:14.107488   97386 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:02:14.107531   97386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:02:14.113237   97386 command_runner.go:130] > b5213941
	I0531 19:02:14.113408   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:02:14.121406   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14232.pem && ln -fs /usr/share/ca-certificates/14232.pem /etc/ssl/certs/14232.pem"
	I0531 19:02:14.129643   97386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14232.pem
	I0531 19:02:14.132673   97386 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 31 18:49 /usr/share/ca-certificates/14232.pem
	I0531 19:02:14.132723   97386 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 31 18:49 /usr/share/ca-certificates/14232.pem
	I0531 19:02:14.132774   97386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14232.pem
	I0531 19:02:14.138704   97386 command_runner.go:130] > 51391683
	I0531 19:02:14.138785   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14232.pem /etc/ssl/certs/51391683.0"
	I0531 19:02:14.147168   97386 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 19:02:14.150150   97386 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 19:02:14.150200   97386 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 19:02:14.150241   97386 kubeadm.go:404] StartCluster: {Name:multinode-697136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-697136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:02:14.150329   97386 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:02:14.150381   97386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:02:14.184398   97386 cri.go:88] found id: ""
	I0531 19:02:14.184463   97386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:02:14.191722   97386 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0531 19:02:14.191751   97386 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0531 19:02:14.191758   97386 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0531 19:02:14.192441   97386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:02:14.200156   97386 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0531 19:02:14.200202   97386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:02:14.207661   97386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0531 19:02:14.207689   97386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0531 19:02:14.207698   97386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0531 19:02:14.207718   97386 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:02:14.207763   97386 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:02:14.207804   97386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0531 19:02:14.251095   97386 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0531 19:02:14.251121   97386 command_runner.go:130] > [init] Using Kubernetes version: v1.27.2
	I0531 19:02:14.251177   97386 kubeadm.go:322] [preflight] Running pre-flight checks
	I0531 19:02:14.251187   97386 command_runner.go:130] > [preflight] Running pre-flight checks
	I0531 19:02:14.287178   97386 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0531 19:02:14.287209   97386 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0531 19:02:14.287292   97386 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1035-gcp
	I0531 19:02:14.287314   97386 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1035-gcp
	I0531 19:02:14.287364   97386 kubeadm.go:322] OS: Linux
	I0531 19:02:14.287374   97386 command_runner.go:130] > OS: Linux
	I0531 19:02:14.287436   97386 kubeadm.go:322] CGROUPS_CPU: enabled
	I0531 19:02:14.287447   97386 command_runner.go:130] > CGROUPS_CPU: enabled
	I0531 19:02:14.287507   97386 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0531 19:02:14.287520   97386 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0531 19:02:14.287589   97386 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0531 19:02:14.287602   97386 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0531 19:02:14.287641   97386 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0531 19:02:14.287648   97386 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0531 19:02:14.287694   97386 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0531 19:02:14.287705   97386 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0531 19:02:14.287769   97386 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0531 19:02:14.287779   97386 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0531 19:02:14.287842   97386 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0531 19:02:14.287852   97386 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0531 19:02:14.287906   97386 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0531 19:02:14.287919   97386 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0531 19:02:14.287992   97386 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0531 19:02:14.288003   97386 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0531 19:02:14.348642   97386 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 19:02:14.348658   97386 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 19:02:14.348816   97386 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 19:02:14.348839   97386 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 19:02:14.348982   97386 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 19:02:14.348994   97386 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 19:02:14.536339   97386 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 19:02:14.538642   97386 out.go:204]   - Generating certificates and keys ...
	I0531 19:02:14.536401   97386 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 19:02:14.538789   97386 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0531 19:02:14.538804   97386 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0531 19:02:14.538867   97386 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0531 19:02:14.538877   97386 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0531 19:02:14.778460   97386 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 19:02:14.778489   97386 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 19:02:15.205370   97386 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0531 19:02:15.205401   97386 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0531 19:02:15.511614   97386 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0531 19:02:15.511641   97386 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0531 19:02:15.739480   97386 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0531 19:02:15.739511   97386 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0531 19:02:15.898108   97386 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0531 19:02:15.898136   97386 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0531 19:02:15.898279   97386 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-697136] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0531 19:02:15.898305   97386 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-697136] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0531 19:02:16.005126   97386 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0531 19:02:16.005153   97386 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0531 19:02:16.005284   97386 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-697136] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0531 19:02:16.005295   97386 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-697136] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0531 19:02:16.211809   97386 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 19:02:16.211835   97386 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 19:02:16.424969   97386 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 19:02:16.424999   97386 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 19:02:16.534206   97386 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0531 19:02:16.534235   97386 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0531 19:02:16.534363   97386 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 19:02:16.534373   97386 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 19:02:16.894087   97386 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 19:02:16.894141   97386 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 19:02:17.136258   97386 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 19:02:17.136285   97386 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 19:02:17.298852   97386 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 19:02:17.298878   97386 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 19:02:17.402002   97386 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 19:02:17.402029   97386 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 19:02:17.409708   97386 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 19:02:17.409748   97386 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 19:02:17.411235   97386 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 19:02:17.411254   97386 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 19:02:17.411329   97386 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0531 19:02:17.411351   97386 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0531 19:02:17.483863   97386 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 19:02:17.487936   97386 out.go:204]   - Booting up control plane ...
	I0531 19:02:17.483960   97386 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 19:02:17.488073   97386 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 19:02:17.488089   97386 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 19:02:17.488255   97386 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 19:02:17.488270   97386 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 19:02:17.488356   97386 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 19:02:17.488370   97386 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 19:02:17.488661   97386 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 19:02:17.488681   97386 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 19:02:17.490813   97386 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 19:02:17.490828   97386 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 19:02:21.992518   97386 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.501661 seconds
	I0531 19:02:21.992549   97386 command_runner.go:130] > [apiclient] All control plane components are healthy after 4.501661 seconds
	I0531 19:02:21.992689   97386 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 19:02:21.992701   97386 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 19:02:22.004648   97386 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 19:02:22.004684   97386 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 19:02:22.525190   97386 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 19:02:22.525233   97386 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0531 19:02:22.525434   97386 kubeadm.go:322] [mark-control-plane] Marking the node multinode-697136 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 19:02:22.525451   97386 command_runner.go:130] > [mark-control-plane] Marking the node multinode-697136 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 19:02:23.035448   97386 kubeadm.go:322] [bootstrap-token] Using token: u9qchw.57pwant1jv6yh335
	I0531 19:02:23.037464   97386 out.go:204]   - Configuring RBAC rules ...
	I0531 19:02:23.035499   97386 command_runner.go:130] > [bootstrap-token] Using token: u9qchw.57pwant1jv6yh335
	I0531 19:02:23.037592   97386 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 19:02:23.037611   97386 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 19:02:23.041194   97386 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 19:02:23.041211   97386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 19:02:23.049584   97386 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 19:02:23.049606   97386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 19:02:23.052501   97386 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 19:02:23.052535   97386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 19:02:23.055149   97386 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 19:02:23.055169   97386 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 19:02:23.057987   97386 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 19:02:23.058004   97386 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 19:02:23.068373   97386 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 19:02:23.068406   97386 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 19:02:23.271957   97386 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0531 19:02:23.271982   97386 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0531 19:02:23.447514   97386 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0531 19:02:23.447544   97386 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0531 19:02:23.448817   97386 kubeadm.go:322] 
	I0531 19:02:23.448924   97386 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0531 19:02:23.448941   97386 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0531 19:02:23.448948   97386 kubeadm.go:322] 
	I0531 19:02:23.449038   97386 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0531 19:02:23.449053   97386 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0531 19:02:23.449059   97386 kubeadm.go:322] 
	I0531 19:02:23.449090   97386 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0531 19:02:23.449102   97386 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0531 19:02:23.449173   97386 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 19:02:23.449184   97386 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 19:02:23.449242   97386 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 19:02:23.449250   97386 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 19:02:23.449254   97386 kubeadm.go:322] 
	I0531 19:02:23.449322   97386 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0531 19:02:23.449332   97386 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0531 19:02:23.449338   97386 kubeadm.go:322] 
	I0531 19:02:23.449407   97386 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 19:02:23.449416   97386 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 19:02:23.449422   97386 kubeadm.go:322] 
	I0531 19:02:23.449483   97386 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0531 19:02:23.449493   97386 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0531 19:02:23.449586   97386 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 19:02:23.449593   97386 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 19:02:23.449675   97386 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 19:02:23.449690   97386 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 19:02:23.449701   97386 kubeadm.go:322] 
	I0531 19:02:23.449797   97386 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 19:02:23.449811   97386 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0531 19:02:23.449900   97386 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0531 19:02:23.449912   97386 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0531 19:02:23.449916   97386 kubeadm.go:322] 
	I0531 19:02:23.450013   97386 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token u9qchw.57pwant1jv6yh335 \
	I0531 19:02:23.450027   97386 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token u9qchw.57pwant1jv6yh335 \
	I0531 19:02:23.450148   97386 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 \
	I0531 19:02:23.450162   97386 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 \
	I0531 19:02:23.450189   97386 kubeadm.go:322] 	--control-plane 
	I0531 19:02:23.450200   97386 command_runner.go:130] > 	--control-plane 
	I0531 19:02:23.450207   97386 kubeadm.go:322] 
	I0531 19:02:23.450309   97386 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0531 19:02:23.450320   97386 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0531 19:02:23.450325   97386 kubeadm.go:322] 
	I0531 19:02:23.450437   97386 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token u9qchw.57pwant1jv6yh335 \
	I0531 19:02:23.450447   97386 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token u9qchw.57pwant1jv6yh335 \
	I0531 19:02:23.450566   97386 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 
	I0531 19:02:23.450580   97386 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 
	I0531 19:02:23.452955   97386 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0531 19:02:23.452977   97386 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0531 19:02:23.453095   97386 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 19:02:23.453108   97386 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 19:02:23.453348   97386 kubeadm.go:322] W0531 19:02:14.348432    1188 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 19:02:23.453371   97386 command_runner.go:130] ! W0531 19:02:14.348432    1188 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 19:02:23.453601   97386 kubeadm.go:322] W0531 19:02:17.488481    1188 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 19:02:23.453613   97386 command_runner.go:130] ! W0531 19:02:17.488481    1188 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0531 19:02:23.453645   97386 cni.go:84] Creating CNI manager for ""
	I0531 19:02:23.453665   97386 cni.go:136] 1 nodes found, recommending kindnet
	I0531 19:02:23.455954   97386 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 19:02:23.458009   97386 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 19:02:23.462259   97386 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0531 19:02:23.462286   97386 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0531 19:02:23.462298   97386 command_runner.go:130] > Device: 33h/51d	Inode: 804304      Links: 1
	I0531 19:02:23.462309   97386 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:02:23.462323   97386 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0531 19:02:23.462335   97386 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0531 19:02:23.462344   97386 command_runner.go:130] > Change: 2023-05-31 18:43:50.927836386 +0000
	I0531 19:02:23.462357   97386 command_runner.go:130] >  Birth: 2023-05-31 18:43:50.903834622 +0000
	I0531 19:02:23.462416   97386 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0531 19:02:23.462427   97386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 19:02:23.480195   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 19:02:24.141944   97386 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0531 19:02:24.146817   97386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0531 19:02:24.153238   97386 command_runner.go:130] > serviceaccount/kindnet created
	I0531 19:02:24.163158   97386 command_runner.go:130] > daemonset.apps/kindnet created
	I0531 19:02:24.167179   97386 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:02:24.167282   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140 minikube.k8s.io/name=multinode-697136 minikube.k8s.io/updated_at=2023_05_31T19_02_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:24.167301   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:24.173812   97386 command_runner.go:130] > -16
	I0531 19:02:24.173839   97386 ops.go:34] apiserver oom_adj: -16
	I0531 19:02:24.256131   97386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0531 19:02:24.260582   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:24.265019   97386 command_runner.go:130] > node/multinode-697136 labeled
	I0531 19:02:24.320158   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:24.823282   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:24.887476   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:25.323010   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:25.386908   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:25.823548   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:25.882869   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:26.322761   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:26.381842   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:26.822649   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:26.886468   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:27.323111   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:27.383668   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:27.823593   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:27.883575   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:28.322841   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:28.382566   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:28.822755   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:28.884399   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:29.322728   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:29.387941   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:29.823591   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:29.883831   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:30.322791   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:30.387029   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:30.822571   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:30.882396   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:31.322769   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:31.383502   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:31.823523   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:31.884271   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:32.322836   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:32.383412   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:32.823586   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:32.888491   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:33.323115   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:33.385993   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:33.822663   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:33.887953   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:34.322919   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:34.385951   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:34.823007   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:34.890695   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:35.323362   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:35.388781   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:35.823444   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:35.885250   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:36.322625   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:36.387715   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:36.823354   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:36.888827   97386 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0531 19:02:37.323448   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 19:02:37.387148   97386 command_runner.go:130] > NAME      SECRETS   AGE
	I0531 19:02:37.387171   97386 command_runner.go:130] > default   0         1s
	I0531 19:02:37.387198   97386 kubeadm.go:1076] duration metric: took 13.219998108s to wait for elevateKubeSystemPrivileges.
	I0531 19:02:37.387220   97386 kubeadm.go:406] StartCluster complete in 23.23698144s
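The long run of `Error from server (NotFound): serviceaccounts "default" not found` retries above is minikube waiting for the token controller to create the default ServiceAccount before finishing StartCluster. A minimal Go sketch of an equivalent wait, assuming a hypothetical configured clientset and ctx (names not from the log):

	package main

	import (
		"context"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForDefaultServiceAccount polls every 500ms (the cadence visible in
	// the timestamps above) until the "default" ServiceAccount exists in the
	// "default" namespace, treating NotFound as "keep retrying".
	func waitForDefaultServiceAccount(ctx context.Context, clientset kubernetes.Interface) error {
		return wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
			_, err := clientset.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // token controller has not created it yet
			}
			return err == nil, err
		})
	}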
	I0531 19:02:37.387243   97386 settings.go:142] acquiring lock: {Name:mk168872ecacf1e04453fffdd7073a8caed6462b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:37.387313   97386 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:02:37.387963   97386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/kubeconfig: {Name:mk2e9ef864ed1e4aaf9a6e1bd97970840e57fe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:02:37.388374   97386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:02:37.388431   97386 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0531 19:02:37.388521   97386 addons.go:66] Setting storage-provisioner=true in profile "multinode-697136"
	I0531 19:02:37.388546   97386 addons.go:228] Setting addon storage-provisioner=true in "multinode-697136"
	I0531 19:02:37.388566   97386 config.go:182] Loaded profile config "multinode-697136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:02:37.388575   97386 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:02:37.388605   97386 host.go:66] Checking if "multinode-697136" exists ...
	I0531 19:02:37.388621   97386 addons.go:66] Setting default-storageclass=true in profile "multinode-697136"
	I0531 19:02:37.388636   97386 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-697136"
	I0531 19:02:37.388874   97386 kapi.go:59] client config for multinode-697136: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
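The rest.Config dump above is the client minikube builds to talk to the new apiserver, with TLS material taken from the profile directory. A hedged sketch of constructing an equivalent client from the kubeconfig path the log shows being updated (function name hypothetical):

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClient loads the kubeconfig written above and builds a typed
	// clientset; cert/key/CA paths come from the kubeconfig itself.
	func newClient() (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/16569-7270/kubeconfig")
		if err != nil {
			return nil, err
		}
		return kubernetes.NewForConfig(cfg)
	}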
	I0531 19:02:37.389092   97386 cli_runner.go:164] Run: docker container inspect multinode-697136 --format={{.State.Status}}
	I0531 19:02:37.388968   97386 cli_runner.go:164] Run: docker container inspect multinode-697136 --format={{.State.Status}}
	I0531 19:02:37.389845   97386 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0531 19:02:37.389860   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:37.389870   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:37.389879   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:37.389967   97386 cert_rotation.go:137] Starting client certificate rotation controller
	I0531 19:02:37.401150   97386 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0531 19:02:37.401175   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:37.401183   97386 round_trippers.go:580]     Audit-Id: ea53c528-8283-4276-bbb1-df03e44149c3
	I0531 19:02:37.401188   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:37.401194   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:37.401199   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:37.401205   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:37.401211   97386 round_trippers.go:580]     Content-Length: 291
	I0531 19:02:37.401217   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:37 GMT
	I0531 19:02:37.401245   97386 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"563d0303-a933-47e8-b089-4856a60f52d0","resourceVersion":"346","creationTimestamp":"2023-05-31T19:02:23Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0531 19:02:37.401675   97386 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"563d0303-a933-47e8-b089-4856a60f52d0","resourceVersion":"346","creationTimestamp":"2023-05-31T19:02:23Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0531 19:02:37.401743   97386 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0531 19:02:37.401755   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:37.401766   97386 round_trippers.go:473]     Content-Type: application/json
	I0531 19:02:37.401780   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:37.401794   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:37.410547   97386 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:02:37.408825   97386 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:02:37.408862   97386 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 19:02:37.410665   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:37.410681   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:37.410694   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:37.410704   97386 round_trippers.go:580]     Content-Length: 291
	I0531 19:02:37.410715   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:37 GMT
	I0531 19:02:37.410725   97386 round_trippers.go:580]     Audit-Id: b09f6eec-424b-43fb-b3ba-c6047faf9f5b
	I0531 19:02:37.410736   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:37.410744   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:37.410772   97386 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"563d0303-a933-47e8-b089-4856a60f52d0","resourceVersion":"347","creationTimestamp":"2023-05-31T19:02:23Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0531 19:02:37.412844   97386 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:02:37.412864   97386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 19:02:37.412914   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:37.411006   97386 kapi.go:59] client config for multinode-697136: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:02:37.413334   97386 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0531 19:02:37.413358   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:37.413369   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:37.413378   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:37.416471   97386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:02:37.416493   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:37.416503   97386 round_trippers.go:580]     Audit-Id: 6691018e-4c07-472b-b122-1ee9795bc3b0
	I0531 19:02:37.416513   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:37.416526   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:37.416536   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:37.416549   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:37.416561   97386 round_trippers.go:580]     Content-Length: 109
	I0531 19:02:37.416574   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:37 GMT
	I0531 19:02:37.416595   97386 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"348"},"items":[]}
	I0531 19:02:37.416872   97386 addons.go:228] Setting addon default-storageclass=true in "multinode-697136"
	I0531 19:02:37.416912   97386 host.go:66] Checking if "multinode-697136" exists ...
	I0531 19:02:37.417386   97386 cli_runner.go:164] Run: docker container inspect multinode-697136 --format={{.State.Status}}
	I0531 19:02:37.433772   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa Username:docker}
	I0531 19:02:37.439967   97386 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 19:02:37.439992   97386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 19:02:37.440040   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:02:37.461462   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa Username:docker}
	I0531 19:02:37.543930   97386 command_runner.go:130] > apiVersion: v1
	I0531 19:02:37.543997   97386 command_runner.go:130] > data:
	I0531 19:02:37.544013   97386 command_runner.go:130] >   Corefile: |
	I0531 19:02:37.544027   97386 command_runner.go:130] >     .:53 {
	I0531 19:02:37.544042   97386 command_runner.go:130] >         errors
	I0531 19:02:37.544056   97386 command_runner.go:130] >         health {
	I0531 19:02:37.544071   97386 command_runner.go:130] >            lameduck 5s
	I0531 19:02:37.544091   97386 command_runner.go:130] >         }
	I0531 19:02:37.544106   97386 command_runner.go:130] >         ready
	I0531 19:02:37.544125   97386 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0531 19:02:37.544139   97386 command_runner.go:130] >            pods insecure
	I0531 19:02:37.544155   97386 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0531 19:02:37.544171   97386 command_runner.go:130] >            ttl 30
	I0531 19:02:37.544191   97386 command_runner.go:130] >         }
	I0531 19:02:37.544210   97386 command_runner.go:130] >         prometheus :9153
	I0531 19:02:37.544225   97386 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0531 19:02:37.544241   97386 command_runner.go:130] >            max_concurrent 1000
	I0531 19:02:37.544255   97386 command_runner.go:130] >         }
	I0531 19:02:37.544269   97386 command_runner.go:130] >         cache 30
	I0531 19:02:37.544289   97386 command_runner.go:130] >         loop
	I0531 19:02:37.544320   97386 command_runner.go:130] >         reload
	I0531 19:02:37.544335   97386 command_runner.go:130] >         loadbalance
	I0531 19:02:37.544348   97386 command_runner.go:130] >     }
	I0531 19:02:37.544362   97386 command_runner.go:130] > kind: ConfigMap
	I0531 19:02:37.544376   97386 command_runner.go:130] > metadata:
	I0531 19:02:37.544406   97386 command_runner.go:130] >   creationTimestamp: "2023-05-31T19:02:23Z"
	I0531 19:02:37.544420   97386 command_runner.go:130] >   name: coredns
	I0531 19:02:37.544434   97386 command_runner.go:130] >   namespace: kube-system
	I0531 19:02:37.544449   97386 command_runner.go:130] >   resourceVersion: "221"
	I0531 19:02:37.544464   97386 command_runner.go:130] >   uid: 4ef5b4ab-fad7-4fce-b2c4-918249ad2c7b
	I0531 19:02:37.548063   97386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
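Decoded, the sed pipeline above fetches the coredns ConfigMap, inserts a `log` directive ahead of `errors` and the following hosts stanza ahead of the `forward . /etc/resolv.conf` block, then replaces the ConfigMap; the effect is confirmed by the "host record injected" line further down. The injected stanza, reconstructed from the sed expressions, is:

	hosts {
	   192.168.58.1 host.minikube.internal
	   fallthrough
	}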
	I0531 19:02:37.562463   97386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:02:37.663426   97386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 19:02:37.911875   97386 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0531 19:02:37.911898   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:37.911906   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:37.911912   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:37.945038   97386 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0531 19:02:37.945066   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:37.945084   97386 round_trippers.go:580]     Content-Length: 291
	I0531 19:02:37.945093   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:37 GMT
	I0531 19:02:37.945102   97386 round_trippers.go:580]     Audit-Id: 3908ae03-1ee1-4d8d-be03-217596bca0e6
	I0531 19:02:37.945111   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:37.945120   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:37.945128   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:37.945137   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:37.945166   97386 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"563d0303-a933-47e8-b089-4856a60f52d0","resourceVersion":"357","creationTimestamp":"2023-05-31T19:02:23Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0531 19:02:37.945302   97386 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-697136" context rescaled to 1 replicas
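The GET/PUT pair on the coredns Scale subresource above is the standard client-go rescale pattern. A minimal sketch of the same operation, assuming the same hypothetical clientset and ctx:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS drops the coredns Deployment to one replica via the
	// autoscaling/v1 Scale subresource, as the round trips above do.
	func rescaleCoreDNS(ctx context.Context, clientset kubernetes.Interface) error {
		deploys := clientset.AppsV1().Deployments("kube-system")
		scale, err := deploys.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = deploys.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}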
	I0531 19:02:37.945332   97386 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:02:37.948417   97386 out.go:177] * Verifying Kubernetes components...
	I0531 19:02:37.953100   97386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:02:38.248597   97386 command_runner.go:130] > configmap/coredns replaced
	I0531 19:02:38.253403   97386 start.go:916] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0531 19:02:38.490497   97386 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0531 19:02:38.495691   97386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0531 19:02:38.502782   97386 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0531 19:02:38.510681   97386 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0531 19:02:38.517619   97386 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0531 19:02:38.552601   97386 command_runner.go:130] > pod/storage-provisioner created
	I0531 19:02:38.557709   97386 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0531 19:02:38.559794   97386 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 19:02:38.558179   97386 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:02:38.561639   97386 addons.go:499] enable addons completed in 1.173211759s: enabled=[storage-provisioner default-storageclass]
	I0531 19:02:38.560073   97386 kapi.go:59] client config for multinode-697136: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:02:38.561917   97386 node_ready.go:35] waiting up to 6m0s for node "multinode-697136" to be "Ready" ...
	I0531 19:02:38.562008   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:38.562018   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:38.562027   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:38.562038   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:38.564079   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:38.564101   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:38.564109   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:38 GMT
	I0531 19:02:38.564115   97386 round_trippers.go:580]     Audit-Id: a49301e7-ef57-4b3f-89b7-a87b01cf2571
	I0531 19:02:38.564121   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:38.564128   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:38.564137   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:38.564150   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:38.564337   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:39.065605   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:39.065634   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:39.065646   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:39.065655   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:39.068035   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:39.068062   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:39.068073   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:39.068081   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:39.068089   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:39.068097   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:39.068105   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:39 GMT
	I0531 19:02:39.068114   97386 round_trippers.go:580]     Audit-Id: 6abd634c-27f0-4d83-9032-e24f0ca9e496
	I0531 19:02:39.068270   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:39.565937   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:39.565958   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:39.565966   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:39.565972   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:39.568318   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:39.568343   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:39.568353   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:39.568361   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:39.568370   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:39 GMT
	I0531 19:02:39.568378   97386 round_trippers.go:580]     Audit-Id: 13481d3c-9b4a-4334-826a-88e15a015321
	I0531 19:02:39.568391   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:39.568402   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:39.568502   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:40.065087   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:40.065109   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:40.065118   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:40.065124   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:40.067551   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:40.067576   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:40.067585   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:40.067594   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:40 GMT
	I0531 19:02:40.067603   97386 round_trippers.go:580]     Audit-Id: 0f21a712-977b-4f8c-91e7-a0c10adc6de5
	I0531 19:02:40.067612   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:40.067620   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:40.067628   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:40.067752   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:40.565241   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:40.565261   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:40.565270   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:40.565276   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:40.569229   97386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:02:40.569254   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:40.569261   97386 round_trippers.go:580]     Audit-Id: 26294953-0c41-47b0-b5e2-c3211bf7b7e8
	I0531 19:02:40.569269   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:40.569276   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:40.569284   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:40.569292   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:40.569300   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:40 GMT
	I0531 19:02:40.569417   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:40.569743   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:02:41.065513   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:41.065531   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:41.065539   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:41.065547   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:41.067887   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:41.067905   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:41.067912   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:41.067918   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:41.067925   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:41 GMT
	I0531 19:02:41.067933   97386 round_trippers.go:580]     Audit-Id: 544c1745-b2f8-48e8-b9c4-97ddae8e32cb
	I0531 19:02:41.067941   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:41.067951   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:41.068078   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:41.565581   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:41.565604   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:41.565616   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:41.565623   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:41.568290   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:41.568334   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:41.568346   97386 round_trippers.go:580]     Audit-Id: 45adf032-8f24-4719-9c3c-9d9000c76b0c
	I0531 19:02:41.568354   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:41.568362   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:41.568371   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:41.568380   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:41.568390   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:41 GMT
	I0531 19:02:41.568515   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:42.065058   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:42.065081   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:42.065093   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:42.065103   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:42.067135   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:42.067157   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:42.067167   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:42 GMT
	I0531 19:02:42.067175   97386 round_trippers.go:580]     Audit-Id: 79a37e40-6f30-41ad-9048-fa97e57f3d62
	I0531 19:02:42.067184   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:42.067194   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:42.067207   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:42.067219   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:42.067353   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:42.565558   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:42.565580   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:42.565588   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:42.565595   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:42.567902   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:42.567932   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:42.567946   97386 round_trippers.go:580]     Audit-Id: d26e0155-35a2-4b9b-ae51-1b5ca702b8d2
	I0531 19:02:42.567956   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:42.567966   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:42.567978   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:42.567992   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:42.567999   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:42 GMT
	I0531 19:02:42.568110   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:43.065575   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:43.065596   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:43.065608   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:43.065616   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:43.067907   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:43.067931   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:43.067942   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:43.067951   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:43 GMT
	I0531 19:02:43.067959   97386 round_trippers.go:580]     Audit-Id: 0c5e0c3b-41d8-41f0-801d-02cd2931bcb6
	I0531 19:02:43.067967   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:43.067976   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:43.067981   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:43.068083   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:43.068421   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:02:43.565583   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:43.565605   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:43.565613   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:43.565620   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:43.567772   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:43.567790   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:43.567797   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:43.567806   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:43.567815   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:43.567826   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:43.567838   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:43 GMT
	I0531 19:02:43.567854   97386 round_trippers.go:580]     Audit-Id: e82ee03a-226f-4401-9ce4-e13bb9fbc39c
	I0531 19:02:43.568021   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:44.065563   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:44.065587   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:44.065599   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:44.065608   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:44.067865   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:44.067894   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:44.067901   97386 round_trippers.go:580]     Audit-Id: f4480eee-4078-44fb-8b8f-30a78efa3739
	I0531 19:02:44.067907   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:44.067913   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:44.067918   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:44.067926   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:44.067934   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:44 GMT
	I0531 19:02:44.068095   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:44.564970   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:44.564996   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:44.565007   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:44.565017   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:44.567475   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:44.567495   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:44.567502   97386 round_trippers.go:580]     Audit-Id: eb035e92-c7ac-4708-94ee-2add9b22bd67
	I0531 19:02:44.567508   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:44.567513   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:44.567520   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:44.567528   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:44.567536   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:44 GMT
	I0531 19:02:44.567666   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:45.065302   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:45.065324   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:45.065339   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:45.065348   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:45.067606   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:45.067631   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:45.067640   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:45.067648   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:45 GMT
	I0531 19:02:45.067657   97386 round_trippers.go:580]     Audit-Id: c890db47-2f3a-4225-bdfa-79fc32a0f700
	I0531 19:02:45.067665   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:45.067678   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:45.067687   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:45.067796   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:45.565355   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:45.565376   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:45.565384   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:45.565390   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:45.567697   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:45.567718   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:45.567725   97386 round_trippers.go:580]     Audit-Id: aa460e0c-bef6-40ee-a4fe-40a451e586a9
	I0531 19:02:45.567731   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:45.567737   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:45.567745   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:45.567753   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:45.567761   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:45 GMT
	I0531 19:02:45.567895   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:45.568209   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:02:46.065057   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:46.065079   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:46.065087   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:46.065096   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:46.067436   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:46.067454   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:46.067461   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:46.067467   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:46.067472   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:46.067478   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:46 GMT
	I0531 19:02:46.067484   97386 round_trippers.go:580]     Audit-Id: 7ce495f6-22aa-487d-ae85-cae96ddf0952
	I0531 19:02:46.067489   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:46.067631   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:46.565271   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:46.565300   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:46.565308   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:46.565314   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:46.567625   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:46.567650   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:46.567659   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:46.567667   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:46.567675   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:46.567684   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:46 GMT
	I0531 19:02:46.567701   97386 round_trippers.go:580]     Audit-Id: 285495d7-6daa-41b6-bbc9-aa21dd66452a
	I0531 19:02:46.567709   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:46.567802   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:47.065045   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:47.065064   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:47.065072   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:47.065078   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:47.067330   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:47.067347   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:47.067354   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:47.067360   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:47.067378   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:47 GMT
	I0531 19:02:47.067384   97386 round_trippers.go:580]     Audit-Id: ebcde354-0f96-47e5-8d94-7a7b2c314959
	I0531 19:02:47.067389   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:47.067397   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:47.067542   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:47.565127   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:47.565146   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:47.565154   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:47.565160   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:47.567313   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:47.567337   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:47.567347   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:47.567356   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:47.567364   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:47.567373   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:47.567382   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:47 GMT
	I0531 19:02:47.567393   97386 round_trippers.go:580]     Audit-Id: 87151026-85a2-48d0-90a6-1309ab6acee4
	I0531 19:02:47.567512   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:48.065065   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:48.065085   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:48.065093   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:48.065100   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:48.067270   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:48.067288   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:48.067294   97386 round_trippers.go:580]     Audit-Id: 8e4ddfe5-bf21-4a90-adb9-50f99c29c75b
	I0531 19:02:48.067300   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:48.067308   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:48.067317   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:48.067327   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:48.067338   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:48 GMT
	I0531 19:02:48.067505   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:48.067796   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:02:48.565037   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:48.565058   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:48.565066   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:48.565072   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:48.567405   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:48.567419   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:48.567426   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:48.567432   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:48 GMT
	I0531 19:02:48.567437   97386 round_trippers.go:580]     Audit-Id: 16415fda-79a7-4df6-9181-82ac9d91ac03
	I0531 19:02:48.567443   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:48.567450   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:48.567459   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:48.567617   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:49.065172   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:49.065203   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:49.065211   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:49.065217   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:49.067645   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:49.067688   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:49.067700   97386 round_trippers.go:580]     Audit-Id: 68de1188-099f-4926-9d88-46fc42ae0faa
	I0531 19:02:49.067710   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:49.067720   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:49.067733   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:49.067742   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:49.067751   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:49 GMT
	I0531 19:02:49.067900   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:49.565578   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:49.565603   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:49.565615   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:49.565624   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:49.567832   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:49.567860   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:49.567870   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:49.567878   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:49.567885   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:49.567894   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:49 GMT
	I0531 19:02:49.567903   97386 round_trippers.go:580]     Audit-Id: aa17d482-19a3-45ee-80b6-0661b5a2e619
	I0531 19:02:49.567915   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:49.568023   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:50.065555   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:50.065576   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:50.065584   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:50.065590   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:50.067991   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:50.068016   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:50.068027   97386 round_trippers.go:580]     Audit-Id: c888351a-f396-43e8-a9f1-c65d63ba5f0a
	I0531 19:02:50.068036   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:50.068045   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:50.068057   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:50.068069   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:50.068079   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:50 GMT
	I0531 19:02:50.068202   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:50.068550   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:02:50.565518   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:50.565538   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:50.565546   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:50.565553   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:50.568024   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:50.568042   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:50.568049   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:50 GMT
	I0531 19:02:50.568054   97386 round_trippers.go:580]     Audit-Id: 9eb3040f-2c7b-46b1-a9ff-a4df0254a7dd
	I0531 19:02:50.568059   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:50.568065   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:50.568070   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:50.568076   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:50.568189   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:51.065514   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:51.065534   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:51.065543   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:51.065549   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:51.067788   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:51.067812   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:51.067821   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:51.067830   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:51.067839   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:51.067847   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:51.067858   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:51 GMT
	I0531 19:02:51.067870   97386 round_trippers.go:580]     Audit-Id: 92e2f86a-84de-4a70-acdc-01f3ac20547b
	I0531 19:02:51.067986   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:51.565510   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:51.565527   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:51.565535   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:51.565541   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:51.567697   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:51.567722   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:51.567732   97386 round_trippers.go:580]     Audit-Id: bc1cd81b-c43a-4861-a676-fe7d63d22dee
	I0531 19:02:51.567740   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:51.567749   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:51.567758   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:51.567769   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:51.567781   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:51 GMT
	I0531 19:02:51.567877   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:52.065460   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:52.065484   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:52.065493   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:52.065503   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:52.067506   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:02:52.067522   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:52.067529   97386 round_trippers.go:580]     Audit-Id: 1ef03661-1b02-4d9f-8825-1831d4644cc4
	I0531 19:02:52.067538   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:52.067546   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:52.067555   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:52.067568   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:52.067577   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:52 GMT
	I0531 19:02:52.067690   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:52.565207   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:52.565229   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:52.565237   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:52.565243   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:52.567602   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:52.567625   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:52.567635   97386 round_trippers.go:580]     Audit-Id: ba3a6e2c-0c0b-4606-afb9-e2b938b57968
	I0531 19:02:52.567645   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:52.567660   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:52.567669   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:52.567678   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:52.567686   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:52 GMT
	I0531 19:02:52.567797   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:52.568201   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:02:53.065010   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:53.065032   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:53.065040   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:53.065046   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:53.067390   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:53.067411   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:53.067421   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:53.067431   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:53 GMT
	I0531 19:02:53.067440   97386 round_trippers.go:580]     Audit-Id: 61726a2b-e729-452c-a4bf-2ed434988e1c
	I0531 19:02:53.067449   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:53.067459   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:53.067472   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:53.067600   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:53.565130   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:53.565150   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:53.565158   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:53.565165   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:53.567604   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:53.567627   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:53.567637   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:53.567645   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:53.567653   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:53.567663   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:53 GMT
	I0531 19:02:53.567681   97386 round_trippers.go:580]     Audit-Id: 11859399-2744-43b6-8deb-8becce9a7d95
	I0531 19:02:53.567694   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:53.567787   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:54.065322   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:54.065353   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:54.065361   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:54.065367   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:54.067707   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:54.067728   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:54.067734   97386 round_trippers.go:580]     Audit-Id: 65ebbf0b-fe5f-4071-9871-4879c3130c51
	I0531 19:02:54.067740   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:54.067751   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:54.067757   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:54.067764   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:54.067772   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:54 GMT
	I0531 19:02:54.067900   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:54.565851   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:54.565872   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:54.565883   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:54.565892   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:54.568223   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:54.568249   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:54.568259   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:54.568268   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:54.568274   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:54.568279   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:54 GMT
	I0531 19:02:54.568285   97386 round_trippers.go:580]     Audit-Id: f03b5d7d-00d0-4543-938f-cdcdea470a49
	I0531 19:02:54.568306   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:54.568417   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:54.568744   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:02:55.065545   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:55.065571   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:55.065580   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:55.065588   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:55.067862   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:55.067885   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:55.067895   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:55.067904   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:55.067935   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:55.067951   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:55.067963   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:55 GMT
	I0531 19:02:55.067974   97386 round_trippers.go:580]     Audit-Id: cc872408-a161-4da2-9abc-4e78d175d5eb
	I0531 19:02:55.068103   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:55.565643   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:55.565661   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:55.565669   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:55.565676   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:55.567759   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:55.567776   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:55.567783   97386 round_trippers.go:580]     Audit-Id: 164df11d-22b5-4079-9c39-ac93d10bd18e
	I0531 19:02:55.567788   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:55.567793   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:55.567799   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:55.567804   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:55.567810   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:55 GMT
	I0531 19:02:55.567919   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:56.065611   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:56.065635   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:56.065643   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:56.065649   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:56.068260   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:56.068279   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:56.068286   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:56.068305   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:56.068314   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:56.068325   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:56.068332   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:56 GMT
	I0531 19:02:56.068340   97386 round_trippers.go:580]     Audit-Id: 9519c949-84fd-4595-9785-aa1c76c789bf
	I0531 19:02:56.068497   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:56.565080   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:56.565105   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:56.565115   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:56.565122   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:56.567472   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:56.567492   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:56.567502   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:56 GMT
	I0531 19:02:56.567510   97386 round_trippers.go:580]     Audit-Id: ae231cc3-4327-4843-b13b-599d70f82525
	I0531 19:02:56.567517   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:56.567526   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:56.567535   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:56.567551   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:56.567672   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:57.065290   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:57.065313   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:57.065323   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:57.065330   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:57.067642   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:57.067662   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:57.067670   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:57.067676   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:57.067681   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:57 GMT
	I0531 19:02:57.067687   97386 round_trippers.go:580]     Audit-Id: b8ea6668-ebcb-40f6-b467-2a9746572201
	I0531 19:02:57.067692   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:57.067697   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:57.067877   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:57.068185   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:02:57.565764   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:57.565784   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:57.565792   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:57.565798   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:57.568093   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:57.568116   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:57.568126   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:57.568135   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:57.568143   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:57.568151   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:57 GMT
	I0531 19:02:57.568164   97386 round_trippers.go:580]     Audit-Id: 573edd69-e897-441b-857e-eee9f130a932
	I0531 19:02:57.568176   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:57.568289   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:58.065523   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:58.065546   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:58.065557   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:58.065568   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:58.067790   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:58.067815   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:58.067825   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:58 GMT
	I0531 19:02:58.067834   97386 round_trippers.go:580]     Audit-Id: bb7d5b4c-71ad-496f-aab9-b2b16b3773fa
	I0531 19:02:58.067842   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:58.067854   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:58.067867   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:58.067882   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:58.067987   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:58.565632   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:58.565657   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:58.565669   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:58.565679   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:58.567906   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:58.567924   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:58.567931   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:58.567937   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:58 GMT
	I0531 19:02:58.567944   97386 round_trippers.go:580]     Audit-Id: dbfdfcdd-8590-475f-932f-5d5275697576
	I0531 19:02:58.567950   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:58.567955   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:58.567961   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:58.568078   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:59.065557   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:59.065577   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:59.065584   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:59.065591   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:59.067910   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:59.067934   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:59.067944   97386 round_trippers.go:580]     Audit-Id: 65815cb4-61c9-45d5-b7b4-808cf20b1113
	I0531 19:02:59.067952   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:59.067960   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:59.067968   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:59.067980   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:59.067988   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:59 GMT
	I0531 19:02:59.068117   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:02:59.068621   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:02:59.565948   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:02:59.565972   97386 round_trippers.go:469] Request Headers:
	I0531 19:02:59.565983   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:02:59.565993   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:02:59.568161   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:02:59.568181   97386 round_trippers.go:577] Response Headers:
	I0531 19:02:59.568191   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:02:59.568198   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:02:59 GMT
	I0531 19:02:59.568207   97386 round_trippers.go:580]     Audit-Id: 2862defb-f77f-4714-9755-c5ddf7ebdcf9
	I0531 19:02:59.568214   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:02:59.568223   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:02:59.568236   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:02:59.568369   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:00.064990   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:00.065013   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:00.065021   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:00.065034   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:00.067480   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:00.067500   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:00.067507   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:00.067513   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:00.067518   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:00.067524   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:00 GMT
	I0531 19:03:00.067535   97386 round_trippers.go:580]     Audit-Id: 687e57c7-e467-40b1-8501-f0c2603aa955
	I0531 19:03:00.067545   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:00.067741   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:00.565071   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:00.565093   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:00.565101   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:00.565107   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:00.567246   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:00.567278   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:00.567289   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:00.567299   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:00 GMT
	I0531 19:03:00.567308   97386 round_trippers.go:580]     Audit-Id: ddc1ba1f-a566-404c-95f4-12951a44a79e
	I0531 19:03:00.567321   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:00.567332   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:00.567344   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:00.567457   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:01.065532   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:01.065553   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:01.065560   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:01.065567   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:01.067879   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:01.067904   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:01.067914   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:01.067924   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:01.067933   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:01.067942   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:01.067950   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:01 GMT
	I0531 19:03:01.067956   97386 round_trippers.go:580]     Audit-Id: a92add45-4432-491e-a612-3cf3f6223ae5
	I0531 19:03:01.068075   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:01.565557   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:01.565582   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:01.565593   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:01.565603   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:01.567797   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:01.567821   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:01.567831   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:01.567839   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:01.567849   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:01.567866   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:01 GMT
	I0531 19:03:01.567875   97386 round_trippers.go:580]     Audit-Id: e2749842-ac45-4012-8323-0bac29ff3350
	I0531 19:03:01.567887   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:01.568006   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:01.568446   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:03:02.065523   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:02.065540   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:02.065548   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:02.065556   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:02.067451   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:02.067469   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:02.067476   97386 round_trippers.go:580]     Audit-Id: 08f2ccd9-0bab-4856-a4a7-4b9371efaaa9
	I0531 19:03:02.067482   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:02.067487   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:02.067492   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:02.067498   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:02.067504   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:02 GMT
	I0531 19:03:02.067620   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:02.565148   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:02.565170   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:02.565178   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:02.565184   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:02.567690   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:02.567712   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:02.567721   97386 round_trippers.go:580]     Audit-Id: 3c243f3d-1648-464b-8e3b-752ff0419ad9
	I0531 19:03:02.567739   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:02.567751   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:02.567759   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:02.567771   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:02.567783   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:02 GMT
	I0531 19:03:02.567907   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:03.065559   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:03.065584   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:03.065592   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:03.065598   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:03.067920   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:03.067941   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:03.067950   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:03.067958   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:03 GMT
	I0531 19:03:03.067966   97386 round_trippers.go:580]     Audit-Id: 0210b536-fad4-474d-9f76-6a278ee1a678
	I0531 19:03:03.067973   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:03.067982   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:03.067995   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:03.068120   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:03.565547   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:03.565571   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:03.565579   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:03.565586   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:03.567923   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:03.567945   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:03.567954   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:03.567962   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:03.567970   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:03.567978   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:03.567990   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:03 GMT
	I0531 19:03:03.568004   97386 round_trippers.go:580]     Audit-Id: ba48a399-47aa-4a9e-b800-3777f2812f9c
	I0531 19:03:03.568127   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:03.568470   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:03:04.065765   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:04.065785   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:04.065793   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:04.065800   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:04.068222   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:04.068252   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:04.068262   97386 round_trippers.go:580]     Audit-Id: 137e9bf6-b125-48c2-9b4a-6bb91e67f067
	I0531 19:03:04.068270   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:04.068278   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:04.068286   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:04.068313   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:04.068327   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:04 GMT
	I0531 19:03:04.068462   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:04.565516   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:04.565537   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:04.565545   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:04.565551   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:04.567814   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:04.567833   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:04.567840   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:04 GMT
	I0531 19:03:04.567849   97386 round_trippers.go:580]     Audit-Id: 851a23a3-be27-4e63-8a04-04f68c07dbdd
	I0531 19:03:04.567858   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:04.567866   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:04.567874   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:04.567888   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:04.568000   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:05.065518   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:05.065537   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:05.065544   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:05.065551   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:05.067689   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:05.067707   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:05.067714   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:05.067719   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:05 GMT
	I0531 19:03:05.067725   97386 round_trippers.go:580]     Audit-Id: fe055df7-3939-4b0e-9cfd-3565f578eb92
	I0531 19:03:05.067731   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:05.067739   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:05.067745   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:05.067872   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:05.565523   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:05.565543   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:05.565551   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:05.565557   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:05.567755   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:05.567777   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:05.567786   97386 round_trippers.go:580]     Audit-Id: 819d3595-3ad0-41f1-96b0-6be8babaf219
	I0531 19:03:05.567795   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:05.567804   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:05.567816   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:05.567829   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:05.567842   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:05 GMT
	I0531 19:03:05.567949   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:06.065525   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:06.065544   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:06.065552   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:06.065558   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:06.067902   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:06.067923   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:06.067933   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:06 GMT
	I0531 19:03:06.067941   97386 round_trippers.go:580]     Audit-Id: a733d52c-30c6-42e7-847e-97032080f352
	I0531 19:03:06.067949   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:06.067956   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:06.067964   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:06.067974   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:06.068182   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:06.068521   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:03:06.565510   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:06.565532   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:06.565540   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:06.565546   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:06.567785   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:06.567810   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:06.567818   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:06 GMT
	I0531 19:03:06.567826   97386 round_trippers.go:580]     Audit-Id: 8b0a0763-cfce-4829-8188-c95612f8f7f5
	I0531 19:03:06.567838   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:06.567845   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:06.567853   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:06.567861   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:06.567967   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:07.065551   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:07.065572   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:07.065579   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:07.065586   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:07.067951   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:07.067968   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:07.067975   97386 round_trippers.go:580]     Audit-Id: c085e60f-7b26-4d82-b91a-873661b8068c
	I0531 19:03:07.067981   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:07.067988   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:07.067996   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:07.068005   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:07.068013   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:07 GMT
	I0531 19:03:07.068136   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:07.565968   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:07.565991   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:07.566005   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:07.566012   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:07.568236   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:07.568255   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:07.568266   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:07.568275   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:07.568284   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:07.568329   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:07 GMT
	I0531 19:03:07.568343   97386 round_trippers.go:580]     Audit-Id: f5d8be47-5282-467c-97c0-86ad03bcf845
	I0531 19:03:07.568352   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:07.568441   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:08.065539   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:08.065564   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:08.065576   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:08.065588   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:08.067854   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:08.067881   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:08.067892   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:08.067901   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:08.067911   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:08 GMT
	I0531 19:03:08.067922   97386 round_trippers.go:580]     Audit-Id: 3bd8ecde-7eff-4a98-a816-457d42c5e6c6
	I0531 19:03:08.067930   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:08.067938   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:08.068162   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:08.068581   97386 node_ready.go:58] node "multinode-697136" has status "Ready":"False"
	I0531 19:03:08.565570   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:08.565598   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:08.565608   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:08.565615   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:08.567994   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:08.568018   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:08.568029   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:08.568039   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:08.568047   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:08.568056   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:08.568067   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:08 GMT
	I0531 19:03:08.568079   97386 round_trippers.go:580]     Audit-Id: 2535686a-1688-4ba4-b5d1-b70d3d9072c7
	I0531 19:03:08.568194   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"321","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0531 19:03:09.065556   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:09.065585   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.065596   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.065604   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.068788   97386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:03:09.068863   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.068893   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.068944   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.068969   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.068984   97386 round_trippers.go:580]     Audit-Id: c7b842fa-c55f-4e11-994c-03b7b8d5c798
	I0531 19:03:09.068994   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.069030   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.069209   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:09.069693   97386 node_ready.go:49] node "multinode-697136" has status "Ready":"True"
	I0531 19:03:09.069717   97386 node_ready.go:38] duration metric: took 30.507769745s waiting for node "multinode-697136" to be "Ready" ...
	I0531 19:03:09.069729   97386 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:03:09.069820   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:03:09.069829   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.069839   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.069848   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.073844   97386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:03:09.073860   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.073867   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.073873   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.073878   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.073889   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.073897   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.073903   97386 round_trippers.go:580]     Audit-Id: a2c407b3-308c-4cf3-8ddb-ba118dd7db4c
	I0531 19:03:09.074296   97386 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"401"},"items":[{"metadata":{"name":"coredns-5d78c9869d-fntsv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3a603b3b-cd36-4c4e-9c48-272ebf4323ee","resourceVersion":"396","creationTimestamp":"2023-05-31T19:02:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"0c6b5e3a-feb8-476c-a469-98dd4afd483c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6b5e3a-feb8-476c-a469-98dd4afd483c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55533 chars]
	I0531 19:03:09.078544   97386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-fntsv" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:09.078678   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-fntsv
	I0531 19:03:09.078700   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.078717   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.078734   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.080719   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:09.080755   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.080769   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.080782   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.080794   97386 round_trippers.go:580]     Audit-Id: 65a95c4c-bc83-4096-9d58-0cd6a5d0967e
	I0531 19:03:09.080805   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.080825   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.080846   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.080974   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-fntsv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3a603b3b-cd36-4c4e-9c48-272ebf4323ee","resourceVersion":"396","creationTimestamp":"2023-05-31T19:02:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"0c6b5e3a-feb8-476c-a469-98dd4afd483c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6b5e3a-feb8-476c-a469-98dd4afd483c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0531 19:03:09.081340   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:09.081373   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.081396   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.081408   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.083822   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:09.083842   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.083851   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.083860   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.083868   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.083886   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.083902   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.083915   97386 round_trippers.go:580]     Audit-Id: f84cdc98-f86a-494e-b6c0-c38becfe9a4a
	I0531 19:03:09.084066   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:09.584754   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-fntsv
	I0531 19:03:09.584774   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.584782   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.584788   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.587044   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:09.587069   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.587080   97386 round_trippers.go:580]     Audit-Id: b4a2e65b-8bee-49db-ad2a-02793191a841
	I0531 19:03:09.587090   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.587100   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.587111   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.587125   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.587138   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.587315   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-fntsv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3a603b3b-cd36-4c4e-9c48-272ebf4323ee","resourceVersion":"409","creationTimestamp":"2023-05-31T19:02:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"0c6b5e3a-feb8-476c-a469-98dd4afd483c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6b5e3a-feb8-476c-a469-98dd4afd483c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0531 19:03:09.587747   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:09.587765   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.587773   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.587780   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.589754   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:09.589775   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.589785   97386 round_trippers.go:580]     Audit-Id: f5056cfb-db0e-4382-acf3-0d26a539f496
	I0531 19:03:09.589793   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.589803   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.589813   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.589826   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.589839   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.589948   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:09.590243   97386 pod_ready.go:92] pod "coredns-5d78c9869d-fntsv" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:09.590261   97386 pod_ready.go:81] duration metric: took 511.662821ms waiting for pod "coredns-5d78c9869d-fntsv" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:09.590272   97386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:09.590357   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-697136
	I0531 19:03:09.590369   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.590379   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.590389   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.592109   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:09.592127   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.592133   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.592139   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.592144   97386 round_trippers.go:580]     Audit-Id: f1f10055-3416-4118-b918-1eb61a62b731
	I0531 19:03:09.592149   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.592154   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.592160   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.592328   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-697136","namespace":"kube-system","uid":"ccd089f5-6d2e-49be-a654-fab118994a39","resourceVersion":"283","creationTimestamp":"2023-05-31T19:02:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"66aaa6d6b901acd2f56b209f1f1672ea","kubernetes.io/config.mirror":"66aaa6d6b901acd2f56b209f1f1672ea","kubernetes.io/config.seen":"2023-05-31T19:02:23.306241078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0531 19:03:09.592769   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:09.592784   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.592793   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.592800   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.594444   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:09.594464   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.594474   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.594483   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.594495   97386 round_trippers.go:580]     Audit-Id: 119e820b-f653-468f-b810-3e1218d8550f
	I0531 19:03:09.594506   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.594517   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.594529   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.594652   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:09.594910   97386 pod_ready.go:92] pod "etcd-multinode-697136" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:09.594921   97386 pod_ready.go:81] duration metric: took 4.642719ms waiting for pod "etcd-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:09.594931   97386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:09.594969   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-697136
	I0531 19:03:09.594976   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.594982   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.594988   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.596683   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:09.596705   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.596713   97386 round_trippers.go:580]     Audit-Id: a14c10ef-45a7-4e24-80ec-5a0b3b5cc061
	I0531 19:03:09.596719   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.596725   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.596733   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.596738   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.596747   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.596902   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-697136","namespace":"kube-system","uid":"2b24f348-410a-4de9-9d78-a304f5a20e2f","resourceVersion":"258","creationTimestamp":"2023-05-31T19:02:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"d1e9e3f7f9a77751cd5d1911c06d4265","kubernetes.io/config.mirror":"d1e9e3f7f9a77751cd5d1911c06d4265","kubernetes.io/config.seen":"2023-05-31T19:02:23.306242754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0531 19:03:09.597271   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:09.597282   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.597289   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.597295   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.598864   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:09.598888   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.598898   97386 round_trippers.go:580]     Audit-Id: 5b1e631a-bb4c-4d1f-8d81-fd66f9f9e9e2
	I0531 19:03:09.598911   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.598924   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.598936   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.598945   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.598958   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.599049   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:09.599314   97386 pod_ready.go:92] pod "kube-apiserver-multinode-697136" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:09.599328   97386 pod_ready.go:81] duration metric: took 4.391579ms waiting for pod "kube-apiserver-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:09.599338   97386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:09.599383   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-697136
	I0531 19:03:09.599392   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.599401   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.599415   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.601052   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:09.601070   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.601080   97386 round_trippers.go:580]     Audit-Id: cbd21261-cd65-4a9d-9ce1-85a073c8411a
	I0531 19:03:09.601092   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.601106   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.601115   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.601127   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.601139   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.601244   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-697136","namespace":"kube-system","uid":"b6cb9f23-df26-4062-b101-d862a5798d37","resourceVersion":"274","creationTimestamp":"2023-05-31T19:02:23Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cbf13c18b5a0e9a44dd9b79914da83aa","kubernetes.io/config.mirror":"cbf13c18b5a0e9a44dd9b79914da83aa","kubernetes.io/config.seen":"2023-05-31T19:02:23.306243874Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0531 19:03:09.601625   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:09.601637   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.601644   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.601652   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.603401   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:09.603415   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.603425   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.603434   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.603447   97386 round_trippers.go:580]     Audit-Id: c8ff0d76-b27f-476e-a90d-29079c2c8aa5
	I0531 19:03:09.603456   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.603465   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.603480   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.603582   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:09.603934   97386 pod_ready.go:92] pod "kube-controller-manager-multinode-697136" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:09.603950   97386 pod_ready.go:81] duration metric: took 4.604267ms waiting for pod "kube-controller-manager-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:09.603963   97386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tgk57" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:09.666216   97386 request.go:628] Waited for 62.189889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgk57
	I0531 19:03:09.666276   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgk57
	I0531 19:03:09.666283   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.666291   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.666300   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.668571   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:09.668589   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.668596   97386 round_trippers.go:580]     Audit-Id: 19b2491b-6c74-47cb-b681-927a15265652
	I0531 19:03:09.668602   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.668608   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.668614   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.668622   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.668627   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.668768   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tgk57","generateName":"kube-proxy-","namespace":"kube-system","uid":"47badf8b-17e5-49d3-bdde-743b58a05b7d","resourceVersion":"367","creationTimestamp":"2023-05-31T19:02:37Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"25b43a82-6e41-4a6d-abee-90da0dfec603","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25b43a82-6e41-4a6d-abee-90da0dfec603\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5508 chars]
	I0531 19:03:09.866594   97386 request.go:628] Waited for 197.371992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:09.866649   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:09.866655   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:09.866667   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:09.866684   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:09.868969   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:09.868989   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:09.868999   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:09.869006   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:09.869014   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:09.869022   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:09 GMT
	I0531 19:03:09.869031   97386 round_trippers.go:580]     Audit-Id: 132440a8-ab7c-4e88-be35-504ca5932074
	I0531 19:03:09.869041   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:09.869150   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:09.869457   97386 pod_ready.go:92] pod "kube-proxy-tgk57" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:09.869473   97386 pod_ready.go:81] duration metric: took 265.49822ms waiting for pod "kube-proxy-tgk57" in "kube-system" namespace to be "Ready" ...
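	The interleaved "Waited for ... due to client-side throttling" lines come from the Kubernetes client's own QPS limiter, not from server-side priority and fairness. A minimal sketch of the same token-bucket pattern, using the external golang.org/x/time/rate module with illustrative (not minikube's actual) QPS and burst values:

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate" // external module: go get golang.org/x/time
    )

    func main() {
        // Token bucket: ~5 requests/second with a burst of 10 (illustrative values).
        limiter := rate.NewLimiter(rate.Limit(5), 10)
        for i := 0; i < 12; i++ {
            start := time.Now()
            _ = limiter.Wait(context.Background()) // blocks once the burst is spent
            if d := time.Since(start); d > time.Millisecond {
                fmt.Printf("request %d waited %v before being sent\n", i, d)
            }
        }
    }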
	I0531 19:03:09.869485   97386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:10.065923   97386 request.go:628] Waited for 196.374519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-697136
	I0531 19:03:10.065975   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-697136
	I0531 19:03:10.065994   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:10.066003   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:10.066013   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:10.068272   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:10.068315   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:10.068327   97386 round_trippers.go:580]     Audit-Id: bb594577-60ef-4d97-afdc-d74989de50fb
	I0531 19:03:10.068336   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:10.068344   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:10.068354   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:10.068367   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:10.068378   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:10 GMT
	I0531 19:03:10.068486   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-697136","namespace":"kube-system","uid":"e6c0e63a-e8fc-4aea-b1f0-573963eb4ad9","resourceVersion":"290","creationTimestamp":"2023-05-31T19:02:23Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e10d631853cc4f1206606e3c2f5048c1","kubernetes.io/config.mirror":"e10d631853cc4f1206606e3c2f5048c1","kubernetes.io/config.seen":"2023-05-31T19:02:23.306232698Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0531 19:03:10.266218   97386 request.go:628] Waited for 197.363141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:10.266284   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:10.266294   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:10.266305   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:10.266314   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:10.268717   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:10.268736   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:10.268742   97386 round_trippers.go:580]     Audit-Id: f7ef1004-591d-4369-b79f-d027973925ce
	I0531 19:03:10.268748   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:10.268753   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:10.268758   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:10.268764   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:10.268772   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:10 GMT
	I0531 19:03:10.268883   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:10.269204   97386 pod_ready.go:92] pod "kube-scheduler-multinode-697136" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:10.269219   97386 pod_ready.go:81] duration metric: took 399.726903ms waiting for pod "kube-scheduler-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:10.269228   97386 pod_ready.go:38] duration metric: took 1.199484304s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
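	Each pod_ready.go:92 line above boils down to one check on the Pod JSON: does status.conditions contain a Ready condition whose status is "True". A self-contained sketch of that check (not minikube's actual code; the sample payload is abbreviated):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // podConditions mirrors only the fragment of the Pod JSON needed here.
    type podConditions struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func podReady(raw []byte) bool {
        var p podConditions
        if json.Unmarshal(raw, &p) != nil {
            return false
        }
        for _, c := range p.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True"
            }
        }
        return false
    }

    func main() {
        sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
        fmt.Println(podReady(sample)) // true
    }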
	I0531 19:03:10.269241   97386 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:03:10.269283   97386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:03:10.278755   97386 command_runner.go:130] > 1443
	I0531 19:03:10.279496   97386 api_server.go:72] duration metric: took 32.334130498s to wait for apiserver process to appear ...
	I0531 19:03:10.279516   97386 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:03:10.279534   97386 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 19:03:10.283686   97386 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
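	The healthz probe is a plain HTTPS GET that expects a 200 with body "ok". A sketch, where InsecureSkipVerify stands in for the client certificates minikube really presents:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Placeholder for minikube's real TLS client certs.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.58.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
    }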
	I0531 19:03:10.283740   97386 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0531 19:03:10.283750   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:10.283764   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:10.283777   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:10.284700   97386 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0531 19:03:10.284715   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:10.284722   97386 round_trippers.go:580]     Audit-Id: 77d2212c-e94a-4318-a9ba-311dd3c156ce
	I0531 19:03:10.284727   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:10.284733   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:10.284738   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:10.284746   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:10.284754   97386 round_trippers.go:580]     Content-Length: 263
	I0531 19:03:10.284759   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:10 GMT
	I0531 19:03:10.284775   97386 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0531 19:03:10.284843   97386 api_server.go:141] control plane version: v1.27.2
	I0531 19:03:10.284856   97386 api_server.go:131] duration metric: took 5.335054ms to wait for apiserver health ...
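	The /version payload above maps onto a small struct; the "control plane version" line amounts to decoding it. A sketch with only the fields used here:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        raw := []byte(`{"major":"1","minor":"27","gitVersion":"v1.27.2","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(raw, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.27.2
    }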
	I0531 19:03:10.284862   97386 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:03:10.466239   97386 request.go:628] Waited for 181.318878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:03:10.466295   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:03:10.466299   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:10.466307   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:10.466319   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:10.470035   97386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:03:10.470057   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:10.470067   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:10.470076   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:10.470084   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:10.470090   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:10.470096   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:10 GMT
	I0531 19:03:10.470101   97386 round_trippers.go:580]     Audit-Id: c64cc70e-0bce-44a7-b5dc-e79455b1a70b
	I0531 19:03:10.470499   97386 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-5d78c9869d-fntsv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3a603b3b-cd36-4c4e-9c48-272ebf4323ee","resourceVersion":"409","creationTimestamp":"2023-05-31T19:02:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"0c6b5e3a-feb8-476c-a469-98dd4afd483c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6b5e3a-feb8-476c-a469-98dd4afd483c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I0531 19:03:10.472948   97386 system_pods.go:59] 8 kube-system pods found
	I0531 19:03:10.472987   97386 system_pods.go:61] "coredns-5d78c9869d-fntsv" [3a603b3b-cd36-4c4e-9c48-272ebf4323ee] Running
	I0531 19:03:10.473001   97386 system_pods.go:61] "etcd-multinode-697136" [ccd089f5-6d2e-49be-a654-fab118994a39] Running
	I0531 19:03:10.473008   97386 system_pods.go:61] "kindnet-hgzvz" [5519ebaa-7169-4dbb-8a30-f179ad47d28b] Running
	I0531 19:03:10.473018   97386 system_pods.go:61] "kube-apiserver-multinode-697136" [2b24f348-410a-4de9-9d78-a304f5a20e2f] Running
	I0531 19:03:10.473024   97386 system_pods.go:61] "kube-controller-manager-multinode-697136" [b6cb9f23-df26-4062-b101-d862a5798d37] Running
	I0531 19:03:10.473029   97386 system_pods.go:61] "kube-proxy-tgk57" [47badf8b-17e5-49d3-bdde-743b58a05b7d] Running
	I0531 19:03:10.473035   97386 system_pods.go:61] "kube-scheduler-multinode-697136" [e6c0e63a-e8fc-4aea-b1f0-573963eb4ad9] Running
	I0531 19:03:10.473048   97386 system_pods.go:61] "storage-provisioner" [fcd8fca8-4007-413a-bb66-be3052cea26f] Running
	I0531 19:03:10.473058   97386 system_pods.go:74] duration metric: took 188.190218ms to wait for pod list to return data ...
	I0531 19:03:10.473069   97386 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:03:10.666368   97386 request.go:628] Waited for 193.225756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 19:03:10.666413   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 19:03:10.666419   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:10.666426   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:10.666434   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:10.668742   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:10.668764   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:10.668771   97386 round_trippers.go:580]     Audit-Id: bb9c385d-283f-4067-a962-5b7a52d6724c
	I0531 19:03:10.668777   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:10.668783   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:10.668789   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:10.668794   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:10.668803   97386 round_trippers.go:580]     Content-Length: 261
	I0531 19:03:10.668809   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:10 GMT
	I0531 19:03:10.668838   97386 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"a20190c7-9d86-4ab9-9f16-b5af7a520ffb","resourceVersion":"315","creationTimestamp":"2023-05-31T19:02:36Z"}}]}
	I0531 19:03:10.669032   97386 default_sa.go:45] found service account: "default"
	I0531 19:03:10.669046   97386 default_sa.go:55] duration metric: took 195.969232ms for default service account to be created ...
	I0531 19:03:10.669054   97386 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:03:10.866465   97386 request.go:628] Waited for 197.343679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:03:10.866536   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:03:10.866545   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:10.866555   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:10.866569   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:10.869826   97386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 19:03:10.869848   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:10.869854   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:10.869860   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:10.869866   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:10.869872   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:10.869878   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:10 GMT
	I0531 19:03:10.869887   97386 round_trippers.go:580]     Audit-Id: 8ad4551e-421a-4c66-a9b3-bc101afb31df
	I0531 19:03:10.870256   97386 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"coredns-5d78c9869d-fntsv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3a603b3b-cd36-4c4e-9c48-272ebf4323ee","resourceVersion":"409","creationTimestamp":"2023-05-31T19:02:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"0c6b5e3a-feb8-476c-a469-98dd4afd483c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6b5e3a-feb8-476c-a469-98dd4afd483c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I0531 19:03:10.872671   97386 system_pods.go:86] 8 kube-system pods found
	I0531 19:03:10.872702   97386 system_pods.go:89] "coredns-5d78c9869d-fntsv" [3a603b3b-cd36-4c4e-9c48-272ebf4323ee] Running
	I0531 19:03:10.872710   97386 system_pods.go:89] "etcd-multinode-697136" [ccd089f5-6d2e-49be-a654-fab118994a39] Running
	I0531 19:03:10.872717   97386 system_pods.go:89] "kindnet-hgzvz" [5519ebaa-7169-4dbb-8a30-f179ad47d28b] Running
	I0531 19:03:10.872724   97386 system_pods.go:89] "kube-apiserver-multinode-697136" [2b24f348-410a-4de9-9d78-a304f5a20e2f] Running
	I0531 19:03:10.872732   97386 system_pods.go:89] "kube-controller-manager-multinode-697136" [b6cb9f23-df26-4062-b101-d862a5798d37] Running
	I0531 19:03:10.872742   97386 system_pods.go:89] "kube-proxy-tgk57" [47badf8b-17e5-49d3-bdde-743b58a05b7d] Running
	I0531 19:03:10.872750   97386 system_pods.go:89] "kube-scheduler-multinode-697136" [e6c0e63a-e8fc-4aea-b1f0-573963eb4ad9] Running
	I0531 19:03:10.872757   97386 system_pods.go:89] "storage-provisioner" [fcd8fca8-4007-413a-bb66-be3052cea26f] Running
	I0531 19:03:10.872766   97386 system_pods.go:126] duration metric: took 203.706733ms to wait for k8s-apps to be running ...
	I0531 19:03:10.872778   97386 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:03:10.872829   97386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:03:10.884862   97386 system_svc.go:56] duration metric: took 12.077453ms WaitForService to wait for kubelet.
	I0531 19:03:10.884887   97386 kubeadm.go:581] duration metric: took 32.939524249s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
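	The kubelet probe above leans on `systemctl is-active --quiet`, which reports state purely through its exit code (0 means active), so no output parsing is needed. A sketch of the same probe run on the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; success or failure is carried by the exit code.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }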
	I0531 19:03:10.884913   97386 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:03:11.066324   97386 request.go:628] Waited for 181.345989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0531 19:03:11.066407   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0531 19:03:11.066415   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:11.066428   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:11.066440   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:11.068911   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:11.068935   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:11.068945   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:11 GMT
	I0531 19:03:11.068952   97386 round_trippers.go:580]     Audit-Id: 5278ae17-28bc-49da-ace1-9e8ff61c4de9
	I0531 19:03:11.068961   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:11.068969   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:11.068978   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:11.068991   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:11.069110   97386 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0531 19:03:11.069668   97386 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0531 19:03:11.069701   97386 node_conditions.go:123] node cpu capacity is 8
	I0531 19:03:11.069718   97386 node_conditions.go:105] duration metric: took 184.800183ms to run NodePressure ...
	I0531 19:03:11.069731   97386 start.go:228] waiting for startup goroutines ...
	I0531 19:03:11.069740   97386 start.go:233] waiting for cluster config update ...
	I0531 19:03:11.069757   97386 start.go:242] writing updated cluster config ...
	I0531 19:03:11.072482   97386 out.go:177] 
	I0531 19:03:11.074384   97386 config.go:182] Loaded profile config "multinode-697136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:03:11.074477   97386 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/config.json ...
	I0531 19:03:11.076829   97386 out.go:177] * Starting worker node multinode-697136-m02 in cluster multinode-697136
	I0531 19:03:11.078580   97386 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:03:11.080275   97386 out.go:177] * Pulling base image ...
	I0531 19:03:11.082319   97386 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:03:11.082339   97386 cache.go:57] Caching tarball of preloaded images
	I0531 19:03:11.082353   97386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:03:11.082417   97386 preload.go:174] Found /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 19:03:11.082428   97386 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0531 19:03:11.082505   97386 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/config.json ...
	I0531 19:03:11.098653   97386 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:03:11.098678   97386 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 19:03:11.098699   97386 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:03:11.098736   97386 start.go:364] acquiring machines lock for multinode-697136-m02: {Name:mkfbf12b8e495ab1d5f99dc080fcee98ed958910 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:03:11.098850   97386 start.go:368] acquired machines lock for "multinode-697136-m02" in 91.499µs
	I0531 19:03:11.098882   97386 start.go:93] Provisioning new machine with config: &{Name:multinode-697136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-697136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0531 19:03:11.098997   97386 start.go:125] createHost starting for "m02" (driver="docker")
	I0531 19:03:11.101177   97386 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0531 19:03:11.101267   97386 start.go:159] libmachine.API.Create for "multinode-697136" (driver="docker")
	I0531 19:03:11.101289   97386 client.go:168] LocalClient.Create starting
	I0531 19:03:11.101358   97386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem
	I0531 19:03:11.101392   97386 main.go:141] libmachine: Decoding PEM data...
	I0531 19:03:11.101410   97386 main.go:141] libmachine: Parsing certificate...
	I0531 19:03:11.101479   97386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem
	I0531 19:03:11.101499   97386 main.go:141] libmachine: Decoding PEM data...
	I0531 19:03:11.101507   97386 main.go:141] libmachine: Parsing certificate...
	I0531 19:03:11.101680   97386 cli_runner.go:164] Run: docker network inspect multinode-697136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:03:11.117233   97386 network_create.go:76] Found existing network {name:multinode-697136 subnet:0xc0015d2870 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0531 19:03:11.117281   97386 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-697136-m02" container
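	kic.go derives the static IP by walking up from the gateway of the existing 192.168.58.0/24 network: the gateway holds .1, the primary node took .2, so m02 gets .3. A sketch of that derivation; the 1-based indexing convention is an assumption for illustration:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nodeIP returns the address for the n-th node (1-based) on a network
    // whose gateway holds the first host address.
    func nodeIP(gateway netip.Addr, n int) netip.Addr {
        ip := gateway
        for i := 0; i < n; i++ {
            ip = ip.Next() // .1 -> .2 for node 1, .2 -> .3 for node 2, ...
        }
        return ip
    }

    func main() {
        gw := netip.MustParseAddr("192.168.58.1")
        fmt.Println(nodeIP(gw, 2)) // 192.168.58.3, matching multinode-697136-m02
    }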
	I0531 19:03:11.117332   97386 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0531 19:03:11.132546   97386 cli_runner.go:164] Run: docker volume create multinode-697136-m02 --label name.minikube.sigs.k8s.io=multinode-697136-m02 --label created_by.minikube.sigs.k8s.io=true
	I0531 19:03:11.148652   97386 oci.go:103] Successfully created a docker volume multinode-697136-m02
	I0531 19:03:11.148728   97386 cli_runner.go:164] Run: docker run --rm --name multinode-697136-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-697136-m02 --entrypoint /usr/bin/test -v multinode-697136-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -d /var/lib
	I0531 19:03:11.654858   97386 oci.go:107] Successfully prepared a docker volume multinode-697136-m02
	I0531 19:03:11.654905   97386 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 19:03:11.654927   97386 kic.go:190] Starting extracting preloaded images to volume ...
	I0531 19:03:11.654998   97386 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-697136-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0531 19:03:16.560595   97386 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-697136-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.905534927s)
	I0531 19:03:16.560622   97386 kic.go:199] duration metric: took 4.905693 seconds to extract preloaded images to volume
	W0531 19:03:16.560732   97386 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0531 19:03:16.560820   97386 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0531 19:03:16.608715   97386 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-697136-m02 --name multinode-697136-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-697136-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-697136-m02 --network multinode-697136 --ip 192.168.58.3 --volume multinode-697136-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 19:03:16.910664   97386 cli_runner.go:164] Run: docker container inspect multinode-697136-m02 --format={{.State.Running}}
	I0531 19:03:16.926792   97386 cli_runner.go:164] Run: docker container inspect multinode-697136-m02 --format={{.State.Status}}
	I0531 19:03:16.944567   97386 cli_runner.go:164] Run: docker exec multinode-697136-m02 stat /var/lib/dpkg/alternatives/iptables
	I0531 19:03:17.005323   97386 oci.go:144] the created container "multinode-697136-m02" has a running status.
	I0531 19:03:17.005352   97386 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136-m02/id_rsa...
	I0531 19:03:17.187902   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0531 19:03:17.187945   97386 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0531 19:03:17.212472   97386 cli_runner.go:164] Run: docker container inspect multinode-697136-m02 --format={{.State.Status}}
	I0531 19:03:17.227695   97386 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0531 19:03:17.227720   97386 kic_runner.go:114] Args: [docker exec --privileged multinode-697136-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0531 19:03:17.294592   97386 cli_runner.go:164] Run: docker container inspect multinode-697136-m02 --format={{.State.Status}}
	I0531 19:03:17.311806   97386 machine.go:88] provisioning docker machine ...
	I0531 19:03:17.311850   97386 ubuntu.go:169] provisioning hostname "multinode-697136-m02"
	I0531 19:03:17.311917   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136-m02
	I0531 19:03:17.337037   97386 main.go:141] libmachine: Using SSH client type: native
	I0531 19:03:17.337619   97386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0531 19:03:17.337640   97386 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-697136-m02 && echo "multinode-697136-m02" | sudo tee /etc/hostname
	I0531 19:03:17.338269   97386 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52216->127.0.0.1:32852: read: connection reset by peer
	I0531 19:03:20.462797   97386 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-697136-m02
	
	I0531 19:03:20.462878   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136-m02
	I0531 19:03:20.479124   97386 main.go:141] libmachine: Using SSH client type: native
	I0531 19:03:20.479536   97386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0531 19:03:20.479554   97386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-697136-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-697136-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-697136-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:03:20.596424   97386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:03:20.596457   97386 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-7270/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-7270/.minikube}
	I0531 19:03:20.596475   97386 ubuntu.go:177] setting up certificates
	I0531 19:03:20.596482   97386 provision.go:83] configureAuth start
	I0531 19:03:20.596524   97386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697136-m02
	I0531 19:03:20.612976   97386 provision.go:138] copyHostCerts
	I0531 19:03:20.613012   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem
	I0531 19:03:20.613044   97386 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem, removing ...
	I0531 19:03:20.613051   97386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem
	I0531 19:03:20.613114   97386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem (1078 bytes)
	I0531 19:03:20.613183   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem
	I0531 19:03:20.613200   97386 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem, removing ...
	I0531 19:03:20.613204   97386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem
	I0531 19:03:20.613225   97386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem (1123 bytes)
	I0531 19:03:20.613267   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem
	I0531 19:03:20.613283   97386 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem, removing ...
	I0531 19:03:20.613289   97386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem
	I0531 19:03:20.613307   97386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem (1675 bytes)
	I0531 19:03:20.613351   97386 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem org=jenkins.multinode-697136-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-697136-m02]
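	The server certificate must carry every name a client might dial, hence the SAN list in the line above (node IP, loopback, hostnames). A sketch of how such a list lands in an x509 template; it self-signs for brevity, whereas minikube signs with the CA key named above:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-697136-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            // SANs taken from the provision.go line above.
            IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "multinode-697136-m02"},
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        fmt.Println(len(der) > 0, err) // self-signed here; minikube signs with its CA
    }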
	I0531 19:03:20.732625   97386 provision.go:172] copyRemoteCerts
	I0531 19:03:20.732682   97386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:03:20.732720   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136-m02
	I0531 19:03:20.749294   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136-m02/id_rsa Username:docker}
	I0531 19:03:20.832972   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 19:03:20.833041   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:03:20.854110   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 19:03:20.854170   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0531 19:03:20.874588   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 19:03:20.874653   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 19:03:20.896661   97386 provision.go:86] duration metric: configureAuth took 300.167163ms
	I0531 19:03:20.896685   97386 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:03:20.896843   97386 config.go:182] Loaded profile config "multinode-697136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:03:20.896924   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136-m02
	I0531 19:03:20.914748   97386 main.go:141] libmachine: Using SSH client type: native
	I0531 19:03:20.915186   97386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0531 19:03:20.915213   97386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:03:21.108907   97386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:03:21.108933   97386 machine.go:91] provisioned docker machine in 3.797101746s
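	The stray %!s(MISSING) in the command above (and in similar lines below) is Go's fmt marker for a format verb that received no argument: the logger re-printed the already-expanded command through Printf, so the marker appears only in the log while the remote shell got the real payload. The behavior in miniature:

    package main

    import "fmt"

    func main() {
        fmt.Printf("printf %s | sudo tee /etc/sysconfig/crio.minikube\n") // one verb, zero args
        // Output: printf %!s(MISSING) | sudo tee /etc/sysconfig/crio.minikube
    }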
	I0531 19:03:21.108942   97386 client.go:171] LocalClient.Create took 10.007646828s
	I0531 19:03:21.108958   97386 start.go:167] duration metric: libmachine.API.Create for "multinode-697136" took 10.007691028s
	I0531 19:03:21.108965   97386 start.go:300] post-start starting for "multinode-697136-m02" (driver="docker")
	I0531 19:03:21.108970   97386 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:03:21.109017   97386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:03:21.109056   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136-m02
	I0531 19:03:21.126201   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136-m02/id_rsa Username:docker}
	I0531 19:03:21.212745   97386 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:03:21.215646   97386 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0531 19:03:21.215662   97386 command_runner.go:130] > NAME="Ubuntu"
	I0531 19:03:21.215668   97386 command_runner.go:130] > VERSION_ID="22.04"
	I0531 19:03:21.215680   97386 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0531 19:03:21.215684   97386 command_runner.go:130] > VERSION_CODENAME=jammy
	I0531 19:03:21.215688   97386 command_runner.go:130] > ID=ubuntu
	I0531 19:03:21.215692   97386 command_runner.go:130] > ID_LIKE=debian
	I0531 19:03:21.215696   97386 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0531 19:03:21.215700   97386 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0531 19:03:21.215707   97386 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0531 19:03:21.215713   97386 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0531 19:03:21.215717   97386 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0531 19:03:21.215772   97386 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:03:21.215794   97386 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:03:21.215802   97386 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:03:21.215810   97386 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 19:03:21.215819   97386 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/addons for local assets ...
	I0531 19:03:21.215867   97386 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/files for local assets ...
	I0531 19:03:21.215940   97386 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> 142322.pem in /etc/ssl/certs
	I0531 19:03:21.215953   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> /etc/ssl/certs/142322.pem
	I0531 19:03:21.216026   97386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:03:21.223761   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /etc/ssl/certs/142322.pem (1708 bytes)
	I0531 19:03:21.245079   97386 start.go:303] post-start completed in 136.099205ms
	I0531 19:03:21.245429   97386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697136-m02
	I0531 19:03:21.261376   97386 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/config.json ...
	I0531 19:03:21.261643   97386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:03:21.261702   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136-m02
	I0531 19:03:21.277176   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136-m02/id_rsa Username:docker}
	I0531 19:03:21.360848   97386 command_runner.go:130] > 17%!
	(MISSING)I0531 19:03:21.360919   97386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:03:21.364888   97386 command_runner.go:130] > 243G
	I0531 19:03:21.365086   97386 start.go:128] duration metric: createHost completed in 10.266076616s
	I0531 19:03:21.365109   97386 start.go:83] releasing machines lock for "multinode-697136-m02", held for 10.266242788s
	I0531 19:03:21.365173   97386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697136-m02
	I0531 19:03:21.382643   97386 out.go:177] * Found network options:
	I0531 19:03:21.384598   97386 out.go:177]   - NO_PROXY=192.168.58.2
	W0531 19:03:21.386769   97386 proxy.go:119] fail to check proxy env: Error ip not in block
	W0531 19:03:21.386808   97386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 19:03:21.386868   97386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:03:21.386904   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136-m02
	I0531 19:03:21.386977   97386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:03:21.387046   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136-m02
	I0531 19:03:21.404568   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136-m02/id_rsa Username:docker}
	I0531 19:03:21.404864   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136-m02/id_rsa Username:docker}
	I0531 19:03:21.624110   97386 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0531 19:03:21.624117   97386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:03:21.628348   97386 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0531 19:03:21.628373   97386 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0531 19:03:21.628382   97386 command_runner.go:130] > Device: b0h/176d	Inode: 792944      Links: 1
	I0531 19:03:21.628390   97386 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:03:21.628398   97386 command_runner.go:130] > Access: 2023-04-04 14:31:21.000000000 +0000
	I0531 19:03:21.628406   97386 command_runner.go:130] > Modify: 2023-04-04 14:31:21.000000000 +0000
	I0531 19:03:21.628413   97386 command_runner.go:130] > Change: 2023-05-31 18:43:50.527806978 +0000
	I0531 19:03:21.628422   97386 command_runner.go:130] >  Birth: 2023-05-31 18:43:50.527806978 +0000
	I0531 19:03:21.628487   97386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:03:21.646366   97386 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
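	Disabling the loopback CNI config is just a rename: every *loopback.conf* file under /etc/cni/net.d gains a .mk_disabled suffix so CRI-O stops loading it. An equivalent sketch (run as root on the node):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, _ := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
        for _, m := range matches {
            if strings.HasSuffix(m, ".mk_disabled") {
                continue // already disabled on a previous run
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                fmt.Println("rename failed:", err)
            }
        }
    }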
	I0531 19:03:21.646453   97386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:03:21.672648   97386 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0531 19:03:21.672682   97386 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0531 19:03:21.672688   97386 start.go:481] detecting cgroup driver to use...
	I0531 19:03:21.672715   97386 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:03:21.672765   97386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:03:21.686605   97386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:03:21.696507   97386 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:03:21.696554   97386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:03:21.708180   97386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:03:21.720701   97386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:03:21.797212   97386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:03:21.877958   97386 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0531 19:03:21.877995   97386 docker.go:209] disabling docker service ...
	I0531 19:03:21.878043   97386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:03:21.895789   97386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:03:21.905893   97386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:03:21.915920   97386 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0531 19:03:21.981764   97386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:03:21.992150   97386 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0531 19:03:22.059541   97386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:03:22.070566   97386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:03:22.083960   97386 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0531 19:03:22.084638   97386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:03:22.084696   97386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:03:22.093130   97386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:03:22.093179   97386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:03:22.101585   97386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:03:22.109752   97386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
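After those three sed edits, the touched keys of /etc/crio/crio.conf.d/02-crio.conf should read roughly as below (reconstructed from the commands, not dumped from the node; the "crio config" output later in this log confirms all three took effect):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"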
	I0531 19:03:22.118205   97386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:03:22.126415   97386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:03:22.133423   97386 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0531 19:03:22.134013   97386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:03:22.141439   97386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:03:22.205430   97386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:03:22.298158   97386 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:03:22.298222   97386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:03:22.301652   97386 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0531 19:03:22.301672   97386 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0531 19:03:22.301681   97386 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0531 19:03:22.301688   97386 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:03:22.301695   97386 command_runner.go:130] > Access: 2023-05-31 19:03:22.285918124 +0000
	I0531 19:03:22.301705   97386 command_runner.go:130] > Modify: 2023-05-31 19:03:22.285918124 +0000
	I0531 19:03:22.301718   97386 command_runner.go:130] > Change: 2023-05-31 19:03:22.285918124 +0000
	I0531 19:03:22.301727   97386 command_runner.go:130] >  Birth: -
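The "Will wait 60s for socket path" step polls until the socket exists; here the very first stat succeeded, about 0.1s after the restart. A hypothetical shell equivalent of that wait (the 60s budget is from the log; the loop itself is an illustration of the behavior, not minikube's code):

	sudo systemctl restart crio
	for i in $(seq 1 60); do
	  stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	  sleep 1
	done
	stat /var/run/crio/crio.sock   # exits non-zero if the socket never appeared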
	I0531 19:03:22.301744   97386 start.go:549] Will wait 60s for crictl version
	I0531 19:03:22.301778   97386 ssh_runner.go:195] Run: which crictl
	I0531 19:03:22.304757   97386 command_runner.go:130] > /usr/bin/crictl
	I0531 19:03:22.304818   97386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:03:22.334061   97386 command_runner.go:130] > Version:  0.1.0
	I0531 19:03:22.334082   97386 command_runner.go:130] > RuntimeName:  cri-o
	I0531 19:03:22.334086   97386 command_runner.go:130] > RuntimeVersion:  1.24.5
	I0531 19:03:22.334091   97386 command_runner.go:130] > RuntimeApiVersion:  v1
	I0531 19:03:22.335839   97386 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 19:03:22.335911   97386 ssh_runner.go:195] Run: crio --version
	I0531 19:03:22.367960   97386 command_runner.go:130] > crio version 1.24.5
	I0531 19:03:22.367983   97386 command_runner.go:130] > Version:          1.24.5
	I0531 19:03:22.367997   97386 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0531 19:03:22.368004   97386 command_runner.go:130] > GitTreeState:     clean
	I0531 19:03:22.368013   97386 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0531 19:03:22.368024   97386 command_runner.go:130] > GoVersion:        go1.18.2
	I0531 19:03:22.368032   97386 command_runner.go:130] > Compiler:         gc
	I0531 19:03:22.368042   97386 command_runner.go:130] > Platform:         linux/amd64
	I0531 19:03:22.368047   97386 command_runner.go:130] > Linkmode:         dynamic
	I0531 19:03:22.368054   97386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0531 19:03:22.368058   97386 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:03:22.368065   97386 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:03:22.369559   97386 ssh_runner.go:195] Run: crio --version
	I0531 19:03:22.399530   97386 command_runner.go:130] > crio version 1.24.5
	I0531 19:03:22.399552   97386 command_runner.go:130] > Version:          1.24.5
	I0531 19:03:22.399577   97386 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0531 19:03:22.399582   97386 command_runner.go:130] > GitTreeState:     clean
	I0531 19:03:22.399588   97386 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0531 19:03:22.399595   97386 command_runner.go:130] > GoVersion:        go1.18.2
	I0531 19:03:22.399603   97386 command_runner.go:130] > Compiler:         gc
	I0531 19:03:22.399616   97386 command_runner.go:130] > Platform:         linux/amd64
	I0531 19:03:22.399624   97386 command_runner.go:130] > Linkmode:         dynamic
	I0531 19:03:22.399638   97386 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0531 19:03:22.399648   97386 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:03:22.399655   97386 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:03:22.403099   97386 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0531 19:03:22.404813   97386 out.go:177]   - env NO_PROXY=192.168.58.2
	I0531 19:03:22.406492   97386 cli_runner.go:164] Run: docker network inspect multinode-697136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:03:22.422601   97386 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0531 19:03:22.426202   97386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
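That /etc/hosts edit is idempotent: grep -v first drops any stale host.minikube.internal line, then the fresh gateway mapping is appended, so repeated starts never accumulate duplicates. The same pattern generalized (NAME and IP are placeholders; the values shown are from this run):

	NAME=host.minikube.internal
	IP=192.168.58.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts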
	I0531 19:03:22.436317   97386 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136 for IP: 192.168.58.3
	I0531 19:03:22.436353   97386 certs.go:190] acquiring lock for shared ca certs: {Name:mkbc42e9eaddef0752bd9f3cb948d1ed478bdf0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:03:22.436504   97386 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key
	I0531 19:03:22.436550   97386 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key
	I0531 19:03:22.436563   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 19:03:22.436580   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 19:03:22.436597   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 19:03:22.436614   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 19:03:22.436679   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem (1338 bytes)
	W0531 19:03:22.436726   97386 certs.go:433] ignoring /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232_empty.pem, impossibly tiny 0 bytes
	I0531 19:03:22.436740   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem (1679 bytes)
	I0531 19:03:22.436773   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem (1078 bytes)
	I0531 19:03:22.436805   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:03:22.436848   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem (1675 bytes)
	I0531 19:03:22.436902   97386 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem (1708 bytes)
	I0531 19:03:22.436944   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:03:22.436962   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem -> /usr/share/ca-certificates/14232.pem
	I0531 19:03:22.436981   97386 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> /usr/share/ca-certificates/142322.pem
	I0531 19:03:22.437300   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:03:22.458062   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:03:22.478843   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:03:22.499226   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:03:22.519769   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:03:22.541463   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem --> /usr/share/ca-certificates/14232.pem (1338 bytes)
	I0531 19:03:22.562201   97386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /usr/share/ca-certificates/142322.pem (1708 bytes)
	I0531 19:03:22.582949   97386 ssh_runner.go:195] Run: openssl version
	I0531 19:03:22.587933   97386 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0531 19:03:22.588112   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142322.pem && ln -fs /usr/share/ca-certificates/142322.pem /etc/ssl/certs/142322.pem"
	I0531 19:03:22.596383   97386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142322.pem
	I0531 19:03:22.599406   97386 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 31 18:49 /usr/share/ca-certificates/142322.pem
	I0531 19:03:22.599444   97386 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 31 18:49 /usr/share/ca-certificates/142322.pem
	I0531 19:03:22.599498   97386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142322.pem
	I0531 19:03:22.605189   97386 command_runner.go:130] > 3ec20f2e
	I0531 19:03:22.605353   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142322.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:03:22.613444   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:03:22.621403   97386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:03:22.624656   97386 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 31 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:03:22.624691   97386 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:03:22.624734   97386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:03:22.630528   97386 command_runner.go:130] > b5213941
	I0531 19:03:22.630689   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:03:22.638729   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14232.pem && ln -fs /usr/share/ca-certificates/14232.pem /etc/ssl/certs/14232.pem"
	I0531 19:03:22.646721   97386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14232.pem
	I0531 19:03:22.649862   97386 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 31 18:49 /usr/share/ca-certificates/14232.pem
	I0531 19:03:22.649906   97386 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 31 18:49 /usr/share/ca-certificates/14232.pem
	I0531 19:03:22.649943   97386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14232.pem
	I0531 19:03:22.656221   97386 command_runner.go:130] > 51391683
	I0531 19:03:22.656288   97386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14232.pem /etc/ssl/certs/51391683.0"
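The openssl/ln pairs above are a by-hand c_rehash: each CA certificate gets a symlink named after its subject hash so OpenSSL's verifier can locate it in /etc/ssl/certs. The pattern for a single cert, slightly compressed (filenames and hash from this run):

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")      # prints b5213941 for this CA
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"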
	I0531 19:03:22.664271   97386 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 19:03:22.667152   97386 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 19:03:22.667193   97386 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0531 19:03:22.667272   97386 ssh_runner.go:195] Run: crio config
	I0531 19:03:22.701950   97386 command_runner.go:130] ! time="2023-05-31 19:03:22.701558846Z" level=info msg="Starting CRI-O, version: 1.24.5, git: b007cb6753d97de6218787b6894b0e3cc1dc8ecd(clean)"
	I0531 19:03:22.701979   97386 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0531 19:03:22.706449   97386 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0531 19:03:22.706473   97386 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0531 19:03:22.706480   97386 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0531 19:03:22.706483   97386 command_runner.go:130] > #
	I0531 19:03:22.706490   97386 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0531 19:03:22.706495   97386 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0531 19:03:22.706501   97386 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0531 19:03:22.706508   97386 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0531 19:03:22.706511   97386 command_runner.go:130] > # reload'.
	I0531 19:03:22.706517   97386 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0531 19:03:22.706527   97386 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0531 19:03:22.706536   97386 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0531 19:03:22.706542   97386 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0531 19:03:22.706547   97386 command_runner.go:130] > [crio]
	I0531 19:03:22.706553   97386 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0531 19:03:22.706560   97386 command_runner.go:130] > # containers images, in this directory.
	I0531 19:03:22.706585   97386 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0531 19:03:22.706596   97386 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0531 19:03:22.706601   97386 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0531 19:03:22.706607   97386 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0531 19:03:22.706615   97386 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0531 19:03:22.706621   97386 command_runner.go:130] > # storage_driver = "vfs"
	I0531 19:03:22.706627   97386 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0531 19:03:22.706635   97386 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0531 19:03:22.706640   97386 command_runner.go:130] > # storage_option = [
	I0531 19:03:22.706649   97386 command_runner.go:130] > # ]
	I0531 19:03:22.706657   97386 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0531 19:03:22.706666   97386 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0531 19:03:22.706686   97386 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0531 19:03:22.706695   97386 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0531 19:03:22.706701   97386 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0531 19:03:22.706707   97386 command_runner.go:130] > # always happen on a node reboot
	I0531 19:03:22.706712   97386 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0531 19:03:22.706722   97386 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0531 19:03:22.706731   97386 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0531 19:03:22.706745   97386 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0531 19:03:22.706753   97386 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0531 19:03:22.706763   97386 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0531 19:03:22.706773   97386 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0531 19:03:22.706779   97386 command_runner.go:130] > # internal_wipe = true
	I0531 19:03:22.706785   97386 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0531 19:03:22.706793   97386 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0531 19:03:22.706800   97386 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0531 19:03:22.706805   97386 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0531 19:03:22.706813   97386 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0531 19:03:22.706817   97386 command_runner.go:130] > [crio.api]
	I0531 19:03:22.706824   97386 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0531 19:03:22.706831   97386 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0531 19:03:22.706836   97386 command_runner.go:130] > # IP address on which the stream server will listen.
	I0531 19:03:22.706843   97386 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0531 19:03:22.706849   97386 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0531 19:03:22.706856   97386 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0531 19:03:22.706860   97386 command_runner.go:130] > # stream_port = "0"
	I0531 19:03:22.706867   97386 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0531 19:03:22.706872   97386 command_runner.go:130] > # stream_enable_tls = false
	I0531 19:03:22.706877   97386 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0531 19:03:22.706882   97386 command_runner.go:130] > # stream_idle_timeout = ""
	I0531 19:03:22.706888   97386 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0531 19:03:22.706896   97386 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0531 19:03:22.706901   97386 command_runner.go:130] > # minutes.
	I0531 19:03:22.706906   97386 command_runner.go:130] > # stream_tls_cert = ""
	I0531 19:03:22.706914   97386 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0531 19:03:22.706923   97386 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0531 19:03:22.706930   97386 command_runner.go:130] > # stream_tls_key = ""
	I0531 19:03:22.706939   97386 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0531 19:03:22.706948   97386 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0531 19:03:22.706955   97386 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0531 19:03:22.706962   97386 command_runner.go:130] > # stream_tls_ca = ""
	I0531 19:03:22.706969   97386 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0531 19:03:22.706976   97386 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0531 19:03:22.706983   97386 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0531 19:03:22.706989   97386 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0531 19:03:22.707010   97386 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0531 19:03:22.707017   97386 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0531 19:03:22.707021   97386 command_runner.go:130] > [crio.runtime]
	I0531 19:03:22.707027   97386 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0531 19:03:22.707035   97386 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0531 19:03:22.707039   97386 command_runner.go:130] > # "nofile=1024:2048"
	I0531 19:03:22.707047   97386 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0531 19:03:22.707053   97386 command_runner.go:130] > # default_ulimits = [
	I0531 19:03:22.707056   97386 command_runner.go:130] > # ]
	I0531 19:03:22.707062   97386 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0531 19:03:22.707070   97386 command_runner.go:130] > # no_pivot = false
	I0531 19:03:22.707078   97386 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0531 19:03:22.707086   97386 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0531 19:03:22.707093   97386 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0531 19:03:22.707099   97386 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0531 19:03:22.707106   97386 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0531 19:03:22.707112   97386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:03:22.707120   97386 command_runner.go:130] > # conmon = ""
	I0531 19:03:22.707127   97386 command_runner.go:130] > # Cgroup setting for conmon
	I0531 19:03:22.707134   97386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0531 19:03:22.707142   97386 command_runner.go:130] > conmon_cgroup = "pod"
	I0531 19:03:22.707150   97386 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0531 19:03:22.707158   97386 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0531 19:03:22.707166   97386 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:03:22.707172   97386 command_runner.go:130] > # conmon_env = [
	I0531 19:03:22.707175   97386 command_runner.go:130] > # ]
	I0531 19:03:22.707180   97386 command_runner.go:130] > # Additional environment variables to set for all the
	I0531 19:03:22.707187   97386 command_runner.go:130] > # containers. These are overridden if set in the
	I0531 19:03:22.707196   97386 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0531 19:03:22.707202   97386 command_runner.go:130] > # default_env = [
	I0531 19:03:22.707206   97386 command_runner.go:130] > # ]
	I0531 19:03:22.707214   97386 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0531 19:03:22.707218   97386 command_runner.go:130] > # selinux = false
	I0531 19:03:22.707225   97386 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0531 19:03:22.707233   97386 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0531 19:03:22.707240   97386 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0531 19:03:22.707248   97386 command_runner.go:130] > # seccomp_profile = ""
	I0531 19:03:22.707254   97386 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0531 19:03:22.707262   97386 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0531 19:03:22.707268   97386 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0531 19:03:22.707274   97386 command_runner.go:130] > # which might increase security.
	I0531 19:03:22.707279   97386 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0531 19:03:22.707287   97386 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0531 19:03:22.707296   97386 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0531 19:03:22.707306   97386 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0531 19:03:22.707314   97386 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0531 19:03:22.707324   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:03:22.707331   97386 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0531 19:03:22.707336   97386 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0531 19:03:22.707342   97386 command_runner.go:130] > # the cgroup blockio controller.
	I0531 19:03:22.707347   97386 command_runner.go:130] > # blockio_config_file = ""
	I0531 19:03:22.707355   97386 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0531 19:03:22.707361   97386 command_runner.go:130] > # irqbalance daemon.
	I0531 19:03:22.707366   97386 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0531 19:03:22.707375   97386 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0531 19:03:22.707382   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:03:22.707386   97386 command_runner.go:130] > # rdt_config_file = ""
	I0531 19:03:22.707393   97386 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0531 19:03:22.707399   97386 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0531 19:03:22.707407   97386 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0531 19:03:22.707413   97386 command_runner.go:130] > # separate_pull_cgroup = ""
	I0531 19:03:22.707419   97386 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0531 19:03:22.707427   97386 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0531 19:03:22.707431   97386 command_runner.go:130] > # will be added.
	I0531 19:03:22.707440   97386 command_runner.go:130] > # default_capabilities = [
	I0531 19:03:22.707446   97386 command_runner.go:130] > # 	"CHOWN",
	I0531 19:03:22.707450   97386 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0531 19:03:22.707457   97386 command_runner.go:130] > # 	"FSETID",
	I0531 19:03:22.707464   97386 command_runner.go:130] > # 	"FOWNER",
	I0531 19:03:22.707468   97386 command_runner.go:130] > # 	"SETGID",
	I0531 19:03:22.707474   97386 command_runner.go:130] > # 	"SETUID",
	I0531 19:03:22.707478   97386 command_runner.go:130] > # 	"SETPCAP",
	I0531 19:03:22.707484   97386 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0531 19:03:22.707488   97386 command_runner.go:130] > # 	"KILL",
	I0531 19:03:22.707493   97386 command_runner.go:130] > # ]
	I0531 19:03:22.707500   97386 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0531 19:03:22.707509   97386 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0531 19:03:22.707515   97386 command_runner.go:130] > # add_inheritable_capabilities = true
	I0531 19:03:22.707522   97386 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0531 19:03:22.707530   97386 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:03:22.707536   97386 command_runner.go:130] > # default_sysctls = [
	I0531 19:03:22.707539   97386 command_runner.go:130] > # ]
	I0531 19:03:22.707549   97386 command_runner.go:130] > # List of devices on the host that a
	I0531 19:03:22.707557   97386 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0531 19:03:22.707563   97386 command_runner.go:130] > # allowed_devices = [
	I0531 19:03:22.707567   97386 command_runner.go:130] > # 	"/dev/fuse",
	I0531 19:03:22.707572   97386 command_runner.go:130] > # ]
	I0531 19:03:22.707577   97386 command_runner.go:130] > # List of additional devices, specified as
	I0531 19:03:22.707614   97386 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0531 19:03:22.707622   97386 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0531 19:03:22.707628   97386 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:03:22.707632   97386 command_runner.go:130] > # additional_devices = [
	I0531 19:03:22.707638   97386 command_runner.go:130] > # ]
	I0531 19:03:22.707643   97386 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0531 19:03:22.707649   97386 command_runner.go:130] > # cdi_spec_dirs = [
	I0531 19:03:22.707653   97386 command_runner.go:130] > # 	"/etc/cdi",
	I0531 19:03:22.707660   97386 command_runner.go:130] > # 	"/var/run/cdi",
	I0531 19:03:22.707667   97386 command_runner.go:130] > # ]
	I0531 19:03:22.707677   97386 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0531 19:03:22.707685   97386 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0531 19:03:22.707692   97386 command_runner.go:130] > # Defaults to false.
	I0531 19:03:22.707699   97386 command_runner.go:130] > # device_ownership_from_security_context = false
	I0531 19:03:22.707705   97386 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0531 19:03:22.707713   97386 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0531 19:03:22.707720   97386 command_runner.go:130] > # hooks_dir = [
	I0531 19:03:22.707724   97386 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0531 19:03:22.707730   97386 command_runner.go:130] > # ]
	I0531 19:03:22.707736   97386 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0531 19:03:22.707744   97386 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0531 19:03:22.707751   97386 command_runner.go:130] > # its default mounts from the following two files:
	I0531 19:03:22.707755   97386 command_runner.go:130] > #
	I0531 19:03:22.707761   97386 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0531 19:03:22.707769   97386 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0531 19:03:22.707777   97386 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0531 19:03:22.707782   97386 command_runner.go:130] > #
	I0531 19:03:22.707788   97386 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0531 19:03:22.707796   97386 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0531 19:03:22.707804   97386 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0531 19:03:22.707814   97386 command_runner.go:130] > #      only add mounts it finds in this file.
	I0531 19:03:22.707821   97386 command_runner.go:130] > #
	I0531 19:03:22.707826   97386 command_runner.go:130] > # default_mounts_file = ""
	I0531 19:03:22.707833   97386 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0531 19:03:22.707840   97386 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0531 19:03:22.707843   97386 command_runner.go:130] > # pids_limit = 0
	I0531 19:03:22.707851   97386 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0531 19:03:22.707857   97386 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0531 19:03:22.707865   97386 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0531 19:03:22.707875   97386 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0531 19:03:22.707879   97386 command_runner.go:130] > # log_size_max = -1
	I0531 19:03:22.707888   97386 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0531 19:03:22.707894   97386 command_runner.go:130] > # log_to_journald = false
	I0531 19:03:22.707900   97386 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0531 19:03:22.707906   97386 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0531 19:03:22.707911   97386 command_runner.go:130] > # Path to directory for container attach sockets.
	I0531 19:03:22.707918   97386 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0531 19:03:22.707923   97386 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0531 19:03:22.707933   97386 command_runner.go:130] > # bind_mount_prefix = ""
	I0531 19:03:22.707940   97386 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0531 19:03:22.707948   97386 command_runner.go:130] > # read_only = false
	I0531 19:03:22.707953   97386 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0531 19:03:22.707961   97386 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0531 19:03:22.707968   97386 command_runner.go:130] > # live configuration reload.
	I0531 19:03:22.707971   97386 command_runner.go:130] > # log_level = "info"
	I0531 19:03:22.707978   97386 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0531 19:03:22.707984   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:03:22.707988   97386 command_runner.go:130] > # log_filter = ""
	I0531 19:03:22.707993   97386 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0531 19:03:22.708001   97386 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0531 19:03:22.708006   97386 command_runner.go:130] > # separated by comma.
	I0531 19:03:22.708012   97386 command_runner.go:130] > # uid_mappings = ""
	I0531 19:03:22.708017   97386 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0531 19:03:22.708027   97386 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0531 19:03:22.708034   97386 command_runner.go:130] > # separated by comma.
	I0531 19:03:22.708038   97386 command_runner.go:130] > # gid_mappings = ""
	I0531 19:03:22.708049   97386 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0531 19:03:22.708057   97386 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:03:22.708065   97386 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:03:22.708071   97386 command_runner.go:130] > # minimum_mappable_uid = -1
	I0531 19:03:22.708078   97386 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0531 19:03:22.708086   97386 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:03:22.708091   97386 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:03:22.708095   97386 command_runner.go:130] > # minimum_mappable_gid = -1
	I0531 19:03:22.708103   97386 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0531 19:03:22.708109   97386 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0531 19:03:22.708117   97386 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0531 19:03:22.708120   97386 command_runner.go:130] > # ctr_stop_timeout = 30
	I0531 19:03:22.708128   97386 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0531 19:03:22.708139   97386 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0531 19:03:22.708145   97386 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0531 19:03:22.708150   97386 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0531 19:03:22.708156   97386 command_runner.go:130] > # drop_infra_ctr = true
	I0531 19:03:22.708163   97386 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0531 19:03:22.708173   97386 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0531 19:03:22.708180   97386 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0531 19:03:22.708187   97386 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0531 19:03:22.708195   97386 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0531 19:03:22.708202   97386 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0531 19:03:22.708206   97386 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0531 19:03:22.708215   97386 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0531 19:03:22.708223   97386 command_runner.go:130] > # pinns_path = ""
	I0531 19:03:22.708232   97386 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0531 19:03:22.708241   97386 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0531 19:03:22.708249   97386 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0531 19:03:22.708255   97386 command_runner.go:130] > # default_runtime = "runc"
	I0531 19:03:22.708260   97386 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0531 19:03:22.708267   97386 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0531 19:03:22.708278   97386 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0531 19:03:22.708286   97386 command_runner.go:130] > # creation as a file is not desired either.
	I0531 19:03:22.708309   97386 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0531 19:03:22.708321   97386 command_runner.go:130] > # the hostname is being managed dynamically.
	I0531 19:03:22.708331   97386 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0531 19:03:22.708339   97386 command_runner.go:130] > # ]
	I0531 19:03:22.708345   97386 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0531 19:03:22.708354   97386 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0531 19:03:22.708362   97386 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0531 19:03:22.708371   97386 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0531 19:03:22.708376   97386 command_runner.go:130] > #
	I0531 19:03:22.708384   97386 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0531 19:03:22.708393   97386 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0531 19:03:22.708401   97386 command_runner.go:130] > #  runtime_type = "oci"
	I0531 19:03:22.708409   97386 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0531 19:03:22.708413   97386 command_runner.go:130] > #  privileged_without_host_devices = false
	I0531 19:03:22.708420   97386 command_runner.go:130] > #  allowed_annotations = []
	I0531 19:03:22.708423   97386 command_runner.go:130] > # Where:
	I0531 19:03:22.708431   97386 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0531 19:03:22.708440   97386 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0531 19:03:22.708448   97386 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0531 19:03:22.708456   97386 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0531 19:03:22.708465   97386 command_runner.go:130] > #   in $PATH.
	I0531 19:03:22.708474   97386 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0531 19:03:22.708479   97386 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0531 19:03:22.708487   97386 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0531 19:03:22.708493   97386 command_runner.go:130] > #   state.
	I0531 19:03:22.708499   97386 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0531 19:03:22.708507   97386 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0531 19:03:22.708513   97386 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0531 19:03:22.708521   97386 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0531 19:03:22.708530   97386 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0531 19:03:22.708538   97386 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0531 19:03:22.708545   97386 command_runner.go:130] > #   The currently recognized values are:
	I0531 19:03:22.708551   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0531 19:03:22.708562   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0531 19:03:22.708570   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0531 19:03:22.708579   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0531 19:03:22.708586   97386 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0531 19:03:22.708595   97386 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0531 19:03:22.708605   97386 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0531 19:03:22.708614   97386 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0531 19:03:22.708621   97386 command_runner.go:130] > #   should be moved to the container's cgroup
	I0531 19:03:22.708625   97386 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0531 19:03:22.708632   97386 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0531 19:03:22.708636   97386 command_runner.go:130] > runtime_type = "oci"
	I0531 19:03:22.708643   97386 command_runner.go:130] > runtime_root = "/run/runc"
	I0531 19:03:22.708647   97386 command_runner.go:130] > runtime_config_path = ""
	I0531 19:03:22.708653   97386 command_runner.go:130] > monitor_path = ""
	I0531 19:03:22.708657   97386 command_runner.go:130] > monitor_cgroup = ""
	I0531 19:03:22.708665   97386 command_runner.go:130] > monitor_exec_cgroup = ""
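As an aside on the runc stanza just above: registering another OCI runtime is simply one more [crio.runtime.runtimes.<name>] table, which Kubernetes then selects through a RuntimeClass. A hypothetical example (the crun binary path, drop-in name and class name are illustrative, not part of this run):

	sudo tee /etc/crio/crio.conf.d/10-crun.conf <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	EOF
	sudo systemctl restart crio
	kubectl apply -f - <<-'EOF'
	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: crun
	handler: crun
	EOF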
	I0531 19:03:22.708716   97386 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0531 19:03:22.708723   97386 command_runner.go:130] > # running containers
	I0531 19:03:22.708727   97386 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0531 19:03:22.708733   97386 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0531 19:03:22.708742   97386 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0531 19:03:22.708749   97386 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0531 19:03:22.708754   97386 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0531 19:03:22.708767   97386 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0531 19:03:22.708776   97386 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0531 19:03:22.708783   97386 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0531 19:03:22.708788   97386 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0531 19:03:22.708795   97386 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0531 19:03:22.708801   97386 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0531 19:03:22.708809   97386 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0531 19:03:22.708817   97386 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0531 19:03:22.708827   97386 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0531 19:03:22.708837   97386 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0531 19:03:22.708843   97386 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0531 19:03:22.708854   97386 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0531 19:03:22.708863   97386 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0531 19:03:22.708871   97386 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0531 19:03:22.708880   97386 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0531 19:03:22.708886   97386 command_runner.go:130] > # Example:
	I0531 19:03:22.708901   97386 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0531 19:03:22.708909   97386 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0531 19:03:22.708918   97386 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0531 19:03:22.708925   97386 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0531 19:03:22.708929   97386 command_runner.go:130] > # cpuset = "0-1"
	I0531 19:03:22.708932   97386 command_runner.go:130] > # cpushares = 0
	I0531 19:03:22.708939   97386 command_runner.go:130] > # Where:
	I0531 19:03:22.708944   97386 command_runner.go:130] > # The workload name is workload-type.
	I0531 19:03:22.708953   97386 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0531 19:03:22.708962   97386 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0531 19:03:22.708970   97386 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0531 19:03:22.708980   97386 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0531 19:03:22.708986   97386 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0531 19:03:22.708991   97386 command_runner.go:130] > # 
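To make the workloads example concrete: a pod opts in with the bare activation annotation, and the per-container form quoted above overrides a single resource. A hypothetical, untested pod sketch using exactly the names from the example config (syntax follows the "io.crio.workload-type/$container_name" form shown above):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/demo: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.9
	EOF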
	I0531 19:03:22.708997   97386 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0531 19:03:22.709003   97386 command_runner.go:130] > #
	I0531 19:03:22.709008   97386 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0531 19:03:22.709017   97386 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0531 19:03:22.709023   97386 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0531 19:03:22.709031   97386 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0531 19:03:22.709042   97386 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0531 19:03:22.709048   97386 command_runner.go:130] > [crio.image]
	I0531 19:03:22.709054   97386 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0531 19:03:22.709061   97386 command_runner.go:130] > # default_transport = "docker://"
	I0531 19:03:22.709067   97386 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0531 19:03:22.709075   97386 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:03:22.709081   97386 command_runner.go:130] > # global_auth_file = ""
	I0531 19:03:22.709087   97386 command_runner.go:130] > # The image used to instantiate infra containers.
	I0531 19:03:22.709093   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:03:22.709098   97386 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0531 19:03:22.709107   97386 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0531 19:03:22.709113   97386 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:03:22.709120   97386 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:03:22.709124   97386 command_runner.go:130] > # pause_image_auth_file = ""
	I0531 19:03:22.709132   97386 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0531 19:03:22.709142   97386 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0531 19:03:22.709151   97386 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0531 19:03:22.709159   97386 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0531 19:03:22.709168   97386 command_runner.go:130] > # pause_command = "/pause"
	I0531 19:03:22.709176   97386 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0531 19:03:22.709185   97386 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0531 19:03:22.709193   97386 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0531 19:03:22.709199   97386 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0531 19:03:22.709206   97386 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0531 19:03:22.709210   97386 command_runner.go:130] > # signature_policy = ""
	I0531 19:03:22.709222   97386 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0531 19:03:22.709231   97386 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0531 19:03:22.709237   97386 command_runner.go:130] > # changing them here.
	I0531 19:03:22.709241   97386 command_runner.go:130] > # insecure_registries = [
	I0531 19:03:22.709246   97386 command_runner.go:130] > # ]
	I0531 19:03:22.709252   97386 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0531 19:03:22.709259   97386 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0531 19:03:22.709263   97386 command_runner.go:130] > # image_volumes = "mkdir"
	I0531 19:03:22.709271   97386 command_runner.go:130] > # Temporary directory to use for storing big files
	I0531 19:03:22.709275   97386 command_runner.go:130] > # big_files_temporary_dir = ""
	I0531 19:03:22.709283   97386 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0531 19:03:22.709292   97386 command_runner.go:130] > # CNI plugins.
	I0531 19:03:22.709296   97386 command_runner.go:130] > [crio.network]
	I0531 19:03:22.709302   97386 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0531 19:03:22.709311   97386 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0531 19:03:22.709318   97386 command_runner.go:130] > # cni_default_network = ""
	I0531 19:03:22.709324   97386 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0531 19:03:22.709331   97386 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0531 19:03:22.709336   97386 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0531 19:03:22.709342   97386 command_runner.go:130] > # plugin_dirs = [
	I0531 19:03:22.709348   97386 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0531 19:03:22.709354   97386 command_runner.go:130] > # ]
	I0531 19:03:22.709359   97386 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0531 19:03:22.709363   97386 command_runner.go:130] > [crio.metrics]
	I0531 19:03:22.709368   97386 command_runner.go:130] > # Globally enable or disable metrics support.
	I0531 19:03:22.709375   97386 command_runner.go:130] > # enable_metrics = false
	I0531 19:03:22.709380   97386 command_runner.go:130] > # Specify enabled metrics collectors.
	I0531 19:03:22.709386   97386 command_runner.go:130] > # Per default all metrics are enabled.
	I0531 19:03:22.709392   97386 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0531 19:03:22.709403   97386 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0531 19:03:22.709411   97386 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0531 19:03:22.709417   97386 command_runner.go:130] > # metrics_collectors = [
	I0531 19:03:22.709421   97386 command_runner.go:130] > # 	"operations",
	I0531 19:03:22.709428   97386 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0531 19:03:22.709432   97386 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0531 19:03:22.709438   97386 command_runner.go:130] > # 	"operations_errors",
	I0531 19:03:22.709442   97386 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0531 19:03:22.709447   97386 command_runner.go:130] > # 	"image_pulls_by_name",
	I0531 19:03:22.709453   97386 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0531 19:03:22.709457   97386 command_runner.go:130] > # 	"image_pulls_failures",
	I0531 19:03:22.709464   97386 command_runner.go:130] > # 	"image_pulls_successes",
	I0531 19:03:22.709468   97386 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0531 19:03:22.709474   97386 command_runner.go:130] > # 	"image_layer_reuse",
	I0531 19:03:22.709478   97386 command_runner.go:130] > # 	"containers_oom_total",
	I0531 19:03:22.709484   97386 command_runner.go:130] > # 	"containers_oom",
	I0531 19:03:22.709488   97386 command_runner.go:130] > # 	"processes_defunct",
	I0531 19:03:22.709495   97386 command_runner.go:130] > # 	"operations_total",
	I0531 19:03:22.709506   97386 command_runner.go:130] > # 	"operations_latency_seconds",
	I0531 19:03:22.709513   97386 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0531 19:03:22.709517   97386 command_runner.go:130] > # 	"operations_errors_total",
	I0531 19:03:22.709524   97386 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0531 19:03:22.709528   97386 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0531 19:03:22.709535   97386 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0531 19:03:22.709539   97386 command_runner.go:130] > # 	"image_pulls_success_total",
	I0531 19:03:22.709546   97386 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0531 19:03:22.709550   97386 command_runner.go:130] > # 	"containers_oom_count_total",
	I0531 19:03:22.709555   97386 command_runner.go:130] > # ]
	I0531 19:03:22.709560   97386 command_runner.go:130] > # The port on which the metrics server will listen.
	I0531 19:03:22.709566   97386 command_runner.go:130] > # metrics_port = 9090
	I0531 19:03:22.709571   97386 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0531 19:03:22.709577   97386 command_runner.go:130] > # metrics_socket = ""
	I0531 19:03:22.709582   97386 command_runner.go:130] > # The certificate for the secure metrics server.
	I0531 19:03:22.709590   97386 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0531 19:03:22.709599   97386 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0531 19:03:22.709605   97386 command_runner.go:130] > # certificate on any modification event.
	I0531 19:03:22.709612   97386 command_runner.go:130] > # metrics_cert = ""
	I0531 19:03:22.709619   97386 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0531 19:03:22.709624   97386 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0531 19:03:22.709630   97386 command_runner.go:130] > # metrics_key = ""
	I0531 19:03:22.709636   97386 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0531 19:03:22.709642   97386 command_runner.go:130] > [crio.tracing]
	I0531 19:03:22.709648   97386 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0531 19:03:22.709654   97386 command_runner.go:130] > # enable_tracing = false
	I0531 19:03:22.709659   97386 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0531 19:03:22.709667   97386 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0531 19:03:22.709678   97386 command_runner.go:130] > # Number of samples to collect per million spans.
	I0531 19:03:22.709686   97386 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0531 19:03:22.709691   97386 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0531 19:03:22.709697   97386 command_runner.go:130] > [crio.stats]
	I0531 19:03:22.709703   97386 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0531 19:03:22.709710   97386 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0531 19:03:22.709715   97386 command_runner.go:130] > # stats_collection_period = 0
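	The rendered config ends here with the metrics, tracing, and stats options all commented out at their defaults. As an aside, CRI-O also reads drop-in files from /etc/crio/crio.conf.d/, so the Prometheus endpoint described above could be enabled without touching the main file; a minimal sketch, assuming shell access to the node (the 10-metrics.conf name is arbitrary):

	  # write a drop-in that flips the two commented-out metrics defaults shown above
	  sudo mkdir -p /etc/crio/crio.conf.d
	  printf '[crio.metrics]\nenable_metrics = true\nmetrics_port = 9090\n' \
	    | sudo tee /etc/crio/crio.conf.d/10-metrics.conf
	  sudo systemctl restart crio
	  # the endpoint should now serve Prometheus text format locally
	  curl -s http://127.0.0.1:9090/metrics | head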
	I0531 19:03:22.709791   97386 cni.go:84] Creating CNI manager for ""
	I0531 19:03:22.709811   97386 cni.go:136] 2 nodes found, recommending kindnet
	I0531 19:03:22.709824   97386 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:03:22.709847   97386 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-697136 NodeName:multinode-697136-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:03:22.709961   97386 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-697136-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
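	A generated config like the one above can be sanity-checked before use; a sketch, assuming it is saved to /tmp/kubeadm.yaml on the node (the validate subcommand ships with kubeadm v1.26 and later):

	  # check the documents above against the kubeadm API schemas
	  /var/lib/minikube/binaries/v1.27.2/kubeadm config validate --config /tmp/kubeadm.yaml
	  # print upstream defaults for side-by-side comparison
	  /var/lib/minikube/binaries/v1.27.2/kubeadm config print init-defaults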
	I0531 19:03:22.710016   97386 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-697136-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-697136 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
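	The unit and its 10-kubeadm.conf drop-in are copied to the node a few lines below; once they are in place, what systemd actually loaded can be confirmed directly on the node, as a sketch:

	  # show the unit file together with every drop-in fragment
	  systemctl cat kubelet
	  # or just the effective command line assembled from the ExecStart above
	  systemctl show kubelet -p ExecStart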
	I0531 19:03:22.710067   97386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0531 19:03:22.717403   97386 command_runner.go:130] > kubeadm
	I0531 19:03:22.717424   97386 command_runner.go:130] > kubectl
	I0531 19:03:22.717430   97386 command_runner.go:130] > kubelet
	I0531 19:03:22.718035   97386 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:03:22.718083   97386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0531 19:03:22.725428   97386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0531 19:03:22.741238   97386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:03:22.756554   97386 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:03:22.759493   97386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
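	The one-liner above is an idempotent hosts update: strip any existing line tab-anchored to the hostname, append the fresh mapping, and copy the temp file back under sudo. The same pattern generalized, where upsert_host is a hypothetical name used only for illustration:

	  upsert_host() {
	    local ip="$1" name="$2"
	    # keep every line except an old mapping for this name, then append the new one
	    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	    sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	  }
	  upsert_host 192.168.58.2 control-plane.minikube.internal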
	I0531 19:03:22.768923   97386 host.go:66] Checking if "multinode-697136" exists ...
	I0531 19:03:22.769141   97386 config.go:182] Loaded profile config "multinode-697136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:03:22.769151   97386 start.go:301] JoinCluster: &{Name:multinode-697136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-697136 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:03:22.769246   97386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0531 19:03:22.769278   97386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:03:22.784773   97386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa Username:docker}
	I0531 19:03:22.915481   97386 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fanr6l.7fendnnz9o4pmurz --discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 
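	The join command above embeds a sha256 hash of the cluster CA public key. For reference, the same digest can be recomputed on the control plane with the openssl pipeline from the kubeadm docs, assuming an RSA CA and the minikube cert directory shown in the kubeadm options earlier:

	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'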
	I0531 19:03:22.920375   97386 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0531 19:03:22.920421   97386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fanr6l.7fendnnz9o4pmurz --discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-697136-m02"
	I0531 19:03:22.953014   97386 command_runner.go:130] ! W0531 19:03:22.952574    1106 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0531 19:03:22.979200   97386 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0531 19:03:23.041944   97386 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 19:03:25.163112   97386 command_runner.go:130] > [preflight] Running pre-flight checks
	I0531 19:03:25.163136   97386 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0531 19:03:25.163145   97386 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1035-gcp
	I0531 19:03:25.163153   97386 command_runner.go:130] > OS: Linux
	I0531 19:03:25.163162   97386 command_runner.go:130] > CGROUPS_CPU: enabled
	I0531 19:03:25.163172   97386 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0531 19:03:25.163183   97386 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0531 19:03:25.163191   97386 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0531 19:03:25.163197   97386 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0531 19:03:25.163204   97386 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0531 19:03:25.163213   97386 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0531 19:03:25.163220   97386 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0531 19:03:25.163227   97386 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0531 19:03:25.163236   97386 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0531 19:03:25.163248   97386 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0531 19:03:25.163258   97386 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 19:03:25.163270   97386 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 19:03:25.163277   97386 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0531 19:03:25.163291   97386 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0531 19:03:25.163295   97386 command_runner.go:130] > This node has joined the cluster:
	I0531 19:03:25.163301   97386 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0531 19:03:25.163307   97386 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0531 19:03:25.163313   97386 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0531 19:03:25.163334   97386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fanr6l.7fendnnz9o4pmurz --discovery-token-ca-cert-hash sha256:762176d172e4c2e2979887de61c98a5df6783b1700b9b76d8140f24ee64a7564 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-697136-m02": (2.242897225s)
	I0531 19:03:25.163355   97386 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0531 19:03:25.324051   97386 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0531 19:03:25.324083   97386 start.go:303] JoinCluster complete in 2.554931539s
	I0531 19:03:25.324093   97386 cni.go:84] Creating CNI manager for ""
	I0531 19:03:25.324098   97386 cni.go:136] 2 nodes found, recommending kindnet
	I0531 19:03:25.324138   97386 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 19:03:25.327556   97386 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0531 19:03:25.327578   97386 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0531 19:03:25.327593   97386 command_runner.go:130] > Device: 33h/51d	Inode: 804304      Links: 1
	I0531 19:03:25.327604   97386 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:03:25.327613   97386 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0531 19:03:25.327621   97386 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0531 19:03:25.327632   97386 command_runner.go:130] > Change: 2023-05-31 18:43:50.927836386 +0000
	I0531 19:03:25.327642   97386 command_runner.go:130] >  Birth: 2023-05-31 18:43:50.903834622 +0000
	I0531 19:03:25.327720   97386 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0531 19:03:25.327733   97386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 19:03:25.344415   97386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 19:03:25.607636   97386 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0531 19:03:25.607658   97386 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0531 19:03:25.607665   97386 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0531 19:03:25.607670   97386 command_runner.go:130] > daemonset.apps/kindnet configured
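	With the kindnet daemonset reconciled, its rollout onto the freshly joined node can be checked directly; a sketch, assuming the app=kindnet label carried by the upstream kindnet manifest:

	  kubectl -n kube-system rollout status daemonset/kindnet
	  kubectl -n kube-system get pods -l app=kindnet -o wide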
	I0531 19:03:25.607982   97386 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:03:25.608191   97386 kapi.go:59] client config for multinode-697136: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:03:25.608490   97386 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0531 19:03:25.608504   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:25.608512   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:25.608519   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:25.610236   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:25.610253   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:25.610260   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:25.610266   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:25.610271   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:25.610277   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:25.610285   97386 round_trippers.go:580]     Content-Length: 291
	I0531 19:03:25.610293   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:25 GMT
	I0531 19:03:25.610305   97386 round_trippers.go:580]     Audit-Id: 97fb6494-fea8-452c-a7e0-f65ae13a5e01
	I0531 19:03:25.610330   97386 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"563d0303-a933-47e8-b089-4856a60f52d0","resourceVersion":"413","creationTimestamp":"2023-05-31T19:02:23Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0531 19:03:25.610420   97386 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-697136" context rescaled to 1 replicas
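	The rescale above goes through the deployment's scale subresource, the same object returned in the response body; the kubectl equivalent, as a sketch:

	  kubectl -n kube-system scale deployment/coredns --replicas=1
	  kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}{"\n"}'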
	I0531 19:03:25.610448   97386 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0531 19:03:25.613159   97386 out.go:177] * Verifying Kubernetes components...
	I0531 19:03:25.614971   97386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:03:25.625490   97386 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:03:25.625746   97386 kapi.go:59] client config for multinode-697136: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/multinode-697136/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:03:25.625957   97386 node_ready.go:35] waiting up to 6m0s for node "multinode-697136-m02" to be "Ready" ...
	I0531 19:03:25.626010   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136-m02
	I0531 19:03:25.626018   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:25.626027   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:25.626033   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:25.628099   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:25.628117   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:25.628123   97386 round_trippers.go:580]     Audit-Id: 9b7fe842-9575-421a-bbb5-1dd1fe9b11d9
	I0531 19:03:25.628128   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:25.628134   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:25.628139   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:25.628146   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:25.628158   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:25 GMT
	I0531 19:03:25.628353   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136-m02","uid":"dbe45c9e-33b4-4d39-99de-e38f0196c522","resourceVersion":"448","creationTimestamp":"2023-05-31T19:03:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:03:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:03:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5101 chars]
	I0531 19:03:26.129445   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136-m02
	I0531 19:03:26.129470   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:26.129482   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:26.129492   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:26.131829   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:26.131853   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:26.131865   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:26.131874   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:26 GMT
	I0531 19:03:26.131889   97386 round_trippers.go:580]     Audit-Id: c4deb485-c7f8-4ea6-9847-34acca51ccca
	I0531 19:03:26.131902   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:26.131913   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:26.131928   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:26.132027   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136-m02","uid":"dbe45c9e-33b4-4d39-99de-e38f0196c522","resourceVersion":"448","creationTimestamp":"2023-05-31T19:03:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:03:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:03:24Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5101 chars]
	I0531 19:03:26.629576   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136-m02
	I0531 19:03:26.629601   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:26.629613   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:26.629623   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:26.631904   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:26.631929   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:26.631939   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:26.631949   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:26.631957   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:26.631968   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:26 GMT
	I0531 19:03:26.631983   97386 round_trippers.go:580]     Audit-Id: bcdbfc90-ee97-4f74-aa1d-7a32c7198acf
	I0531 19:03:26.631992   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:26.632110   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136-m02","uid":"dbe45c9e-33b4-4d39-99de-e38f0196c522","resourceVersion":"458","creationTimestamp":"2023-05-31T19:03:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:03:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5210 chars]
	I0531 19:03:27.129554   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136-m02
	I0531 19:03:27.129575   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.129583   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.129589   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.131905   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:27.131929   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.131936   97386 round_trippers.go:580]     Audit-Id: aa7eb0a5-ffa9-4efc-a35c-f62da20829ff
	I0531 19:03:27.131942   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.131949   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.131958   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.131967   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.131978   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.132199   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136-m02","uid":"dbe45c9e-33b4-4d39-99de-e38f0196c522","resourceVersion":"458","creationTimestamp":"2023-05-31T19:03:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:03:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5210 chars]
	I0531 19:03:27.629820   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136-m02
	I0531 19:03:27.629839   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.629847   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.629854   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.632262   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:27.632283   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.632312   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.632322   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.632330   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.632338   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.632354   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.632363   97386 round_trippers.go:580]     Audit-Id: c5898b16-e230-47f6-bab1-992c2acd509f
	I0531 19:03:27.632460   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136-m02","uid":"dbe45c9e-33b4-4d39-99de-e38f0196c522","resourceVersion":"471","creationTimestamp":"2023-05-31T19:03:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:03:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5296 chars]
	I0531 19:03:27.632734   97386 node_ready.go:49] node "multinode-697136-m02" has status "Ready":"True"
	I0531 19:03:27.632747   97386 node_ready.go:38] duration metric: took 2.006778269s waiting for node "multinode-697136-m02" to be "Ready" ...
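	The polling loop above is the manual form of a kubectl wait; an equivalent sketch against the same node:

	  kubectl wait --for=condition=Ready node/multinode-697136-m02 --timeout=6m
	  # or read the Ready condition directly, as the GETs above do
	  kubectl get node multinode-697136-m02 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'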
	I0531 19:03:27.632755   97386 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:03:27.632802   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0531 19:03:27.632809   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.632816   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.632822   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.638265   97386 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 19:03:27.638291   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.638301   97386 round_trippers.go:580]     Audit-Id: 114b9c52-c808-4804-8ce2-7c79950bed01
	I0531 19:03:27.638310   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.638318   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.638327   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.638344   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.638353   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.638955   97386 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"475"},"items":[{"metadata":{"name":"coredns-5d78c9869d-fntsv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3a603b3b-cd36-4c4e-9c48-272ebf4323ee","resourceVersion":"409","creationTimestamp":"2023-05-31T19:02:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"0c6b5e3a-feb8-476c-a469-98dd4afd483c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6b5e3a-feb8-476c-a469-98dd4afd483c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I0531 19:03:27.641848   97386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-fntsv" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:27.641983   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-fntsv
	I0531 19:03:27.642007   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.642025   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.642042   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.656255   97386 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0531 19:03:27.656281   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.656310   97386 round_trippers.go:580]     Audit-Id: 138b204c-3ffc-42f1-aca0-4c18ee34fead
	I0531 19:03:27.656320   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.656329   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.656338   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.656346   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.656370   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.656481   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-fntsv","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"3a603b3b-cd36-4c4e-9c48-272ebf4323ee","resourceVersion":"409","creationTimestamp":"2023-05-31T19:02:37Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"0c6b5e3a-feb8-476c-a469-98dd4afd483c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6b5e3a-feb8-476c-a469-98dd4afd483c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0531 19:03:27.656901   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:27.656911   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.656918   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.656924   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.658714   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:27.658733   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.658741   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.658750   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.658763   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.658777   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.658789   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.658802   97386 round_trippers.go:580]     Audit-Id: d987e8c7-3841-4c70-9dc5-20d265e298ea
	I0531 19:03:27.659006   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:27.659330   97386 pod_ready.go:92] pod "coredns-5d78c9869d-fntsv" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:27.659343   97386 pod_ready.go:81] duration metric: took 17.435776ms waiting for pod "coredns-5d78c9869d-fntsv" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:27.659363   97386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:27.659412   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-697136
	I0531 19:03:27.659419   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.659426   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.659436   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.661278   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:27.661297   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.661308   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.661320   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.661330   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.661336   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.661346   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.661358   97386 round_trippers.go:580]     Audit-Id: 7e25f269-f554-4d8d-b5d0-fa0ef00eee56
	I0531 19:03:27.661452   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-697136","namespace":"kube-system","uid":"ccd089f5-6d2e-49be-a654-fab118994a39","resourceVersion":"283","creationTimestamp":"2023-05-31T19:02:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"66aaa6d6b901acd2f56b209f1f1672ea","kubernetes.io/config.mirror":"66aaa6d6b901acd2f56b209f1f1672ea","kubernetes.io/config.seen":"2023-05-31T19:02:23.306241078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0531 19:03:27.661772   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:27.661783   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.661790   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.661796   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.663263   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:27.663282   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.663292   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.663298   97386 round_trippers.go:580]     Audit-Id: 26a8b117-ea60-4bd2-86de-cf99b7ac06b9
	I0531 19:03:27.663303   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.663309   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.663317   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.663325   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.663455   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:27.663750   97386 pod_ready.go:92] pod "etcd-multinode-697136" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:27.663762   97386 pod_ready.go:81] duration metric: took 4.388494ms waiting for pod "etcd-multinode-697136" in "kube-system" namespace to be "Ready" ...
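	The per-pod sweeps here key off the component/tier and k8s-app labels visible in the response bodies; the same checks can be expressed with kubectl, as a sketch:

	  kubectl -n kube-system get pods -l tier=control-plane
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m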
	I0531 19:03:27.663776   97386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:27.663822   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-697136
	I0531 19:03:27.663830   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.663836   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.663842   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.665395   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:27.665409   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.665420   97386 round_trippers.go:580]     Audit-Id: 43b77f38-0cf9-4aa0-9f96-5c51cbf6e1c4
	I0531 19:03:27.665426   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.665431   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.665436   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.665443   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.665449   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.665597   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-697136","namespace":"kube-system","uid":"2b24f348-410a-4de9-9d78-a304f5a20e2f","resourceVersion":"258","creationTimestamp":"2023-05-31T19:02:23Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"d1e9e3f7f9a77751cd5d1911c06d4265","kubernetes.io/config.mirror":"d1e9e3f7f9a77751cd5d1911c06d4265","kubernetes.io/config.seen":"2023-05-31T19:02:23.306242754Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0531 19:03:27.665933   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:27.665944   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.665950   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.665957   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.667411   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:27.667431   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.667441   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.667450   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.667458   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.667466   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.667478   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.667493   97386 round_trippers.go:580]     Audit-Id: a9471ca9-3498-4202-aa34-8abc2f32bd12
	I0531 19:03:27.667592   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:27.667836   97386 pod_ready.go:92] pod "kube-apiserver-multinode-697136" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:27.667846   97386 pod_ready.go:81] duration metric: took 4.060133ms waiting for pod "kube-apiserver-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:27.667854   97386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:27.667892   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-697136
	I0531 19:03:27.667899   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.667906   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.667912   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.669423   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:27.669443   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.669453   97386 round_trippers.go:580]     Audit-Id: 54351ab0-7665-418e-aeaa-f74a6016d792
	I0531 19:03:27.669461   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.669469   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.669477   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.669486   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.669495   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.669624   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-697136","namespace":"kube-system","uid":"b6cb9f23-df26-4062-b101-d862a5798d37","resourceVersion":"274","creationTimestamp":"2023-05-31T19:02:23Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cbf13c18b5a0e9a44dd9b79914da83aa","kubernetes.io/config.mirror":"cbf13c18b5a0e9a44dd9b79914da83aa","kubernetes.io/config.seen":"2023-05-31T19:02:23.306243874Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0531 19:03:27.670020   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:27.670033   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.670039   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.670045   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.671469   97386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 19:03:27.671490   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.671500   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.671509   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.671520   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.671528   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.671539   97386 round_trippers.go:580]     Audit-Id: adcd4662-d861-4133-a409-a3c1b9f28808
	I0531 19:03:27.671549   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.671636   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:27.671898   97386 pod_ready.go:92] pod "kube-controller-manager-multinode-697136" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:27.671911   97386 pod_ready.go:81] duration metric: took 4.051932ms waiting for pod "kube-controller-manager-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:27.671918   97386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tgk57" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:27.830287   97386 request.go:628] Waited for 158.316131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgk57
	I0531 19:03:27.830373   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgk57
	I0531 19:03:27.830388   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:27.830399   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:27.830411   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:27.832908   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:27.832926   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:27.832932   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:27.832938   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:27.832944   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:27.832949   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:27.832957   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:27 GMT
	I0531 19:03:27.832962   97386 round_trippers.go:580]     Audit-Id: 409853aa-971f-41a0-88ec-079fe1330a6f
	I0531 19:03:27.833099   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tgk57","generateName":"kube-proxy-","namespace":"kube-system","uid":"47badf8b-17e5-49d3-bdde-743b58a05b7d","resourceVersion":"367","creationTimestamp":"2023-05-31T19:02:37Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"25b43a82-6e41-4a6d-abee-90da0dfec603","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25b43a82-6e41-4a6d-abee-90da0dfec603\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5508 chars]
	I0531 19:03:28.029830   97386 request.go:628] Waited for 196.298802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:28.029888   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:28.029894   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:28.029902   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:28.029908   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:28.032364   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:28.032390   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:28.032402   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:28 GMT
	I0531 19:03:28.032412   97386 round_trippers.go:580]     Audit-Id: feff4b8b-92dd-470f-b03c-b13dbefe6088
	I0531 19:03:28.032428   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:28.032437   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:28.032450   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:28.032464   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:28.032583   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:28.032927   97386 pod_ready.go:92] pod "kube-proxy-tgk57" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:28.032942   97386 pod_ready.go:81] duration metric: took 361.018485ms waiting for pod "kube-proxy-tgk57" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:28.032952   97386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wc7m5" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:28.230379   97386 request.go:628] Waited for 197.344954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wc7m5
	I0531 19:03:28.230436   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wc7m5
	I0531 19:03:28.230440   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:28.230448   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:28.230454   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:28.232787   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:28.232812   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:28.232822   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:28.232832   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:28.232839   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:28 GMT
	I0531 19:03:28.232848   97386 round_trippers.go:580]     Audit-Id: dd80cbc9-02da-4cef-829d-be7c0e46f221
	I0531 19:03:28.232863   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:28.232872   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:28.233004   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wc7m5","generateName":"kube-proxy-","namespace":"kube-system","uid":"75452ad6-132a-434a-a2a9-16a3a14d0395","resourceVersion":"472","creationTimestamp":"2023-05-31T19:03:24Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"25b43a82-6e41-4a6d-abee-90da0dfec603","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:03:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25b43a82-6e41-4a6d-abee-90da0dfec603\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5516 chars]
	I0531 19:03:28.430753   97386 request.go:628] Waited for 197.339424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-697136-m02
	I0531 19:03:28.430837   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136-m02
	I0531 19:03:28.430845   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:28.430853   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:28.430859   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:28.433067   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:28.433084   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:28.433091   97386 round_trippers.go:580]     Audit-Id: 1d956229-7717-4885-92d7-017e7a6cb123
	I0531 19:03:28.433097   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:28.433102   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:28.433107   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:28.433118   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:28.433130   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:28 GMT
	I0531 19:03:28.433282   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136-m02","uid":"dbe45c9e-33b4-4d39-99de-e38f0196c522","resourceVersion":"471","creationTimestamp":"2023-05-31T19:03:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:03:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5296 chars]
	I0531 19:03:28.433603   97386 pod_ready.go:92] pod "kube-proxy-wc7m5" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:28.433619   97386 pod_ready.go:81] duration metric: took 400.658098ms waiting for pod "kube-proxy-wc7m5" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:28.433629   97386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:28.629987   97386 request.go:628] Waited for 196.288875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-697136
	I0531 19:03:28.630052   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-697136
	I0531 19:03:28.630060   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:28.630068   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:28.630078   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:28.632705   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:28.632723   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:28.632730   97386 round_trippers.go:580]     Audit-Id: 5ee97259-6778-4d9d-892a-2f18298d8caf
	I0531 19:03:28.632737   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:28.632746   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:28.632756   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:28.632768   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:28.632778   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:28 GMT
	I0531 19:03:28.632892   97386 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-697136","namespace":"kube-system","uid":"e6c0e63a-e8fc-4aea-b1f0-573963eb4ad9","resourceVersion":"290","creationTimestamp":"2023-05-31T19:02:23Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e10d631853cc4f1206606e3c2f5048c1","kubernetes.io/config.mirror":"e10d631853cc4f1206606e3c2f5048c1","kubernetes.io/config.seen":"2023-05-31T19:02:23.306232698Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-31T19:02:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0531 19:03:28.830628   97386 request.go:628] Waited for 197.350026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:28.830693   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-697136
	I0531 19:03:28.830698   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:28.830706   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:28.830714   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:28.833089   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:28.833108   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:28.833115   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:28.833124   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:28 GMT
	I0531 19:03:28.833129   97386 round_trippers.go:580]     Audit-Id: 64cf941e-afb6-4127-9513-fa2390e1f41f
	I0531 19:03:28.833142   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:28.833150   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:28.833159   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:28.833295   97386 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-05-31T19:02:20Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0531 19:03:28.833619   97386 pod_ready.go:92] pod "kube-scheduler-multinode-697136" in "kube-system" namespace has status "Ready":"True"
	I0531 19:03:28.833633   97386 pod_ready.go:81] duration metric: took 399.991371ms waiting for pod "kube-scheduler-multinode-697136" in "kube-system" namespace to be "Ready" ...
	I0531 19:03:28.833644   97386 pod_ready.go:38] duration metric: took 1.200880428s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
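
The pod_ready phase above polls each control-plane pod and gates on its PodReady condition before moving on. Below is a minimal client-go sketch of that check; it is a hypothetical standalone helper, not minikube's actual pod_ready.go code, with the pod name and 6m0s timeout taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll until Ready, within the same 6m0s budget the log shows.
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "kube-scheduler-multinode-697136", metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient API errors and retry
            }
            return isPodReady(pod), nil
        })
        fmt.Println("ready:", err == nil)
    }
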
	I0531 19:03:28.833665   97386 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:03:28.833712   97386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:03:28.844390   97386 system_svc.go:56] duration metric: took 10.710275ms WaitForService to wait for kubelet.
	I0531 19:03:28.844442   97386 kubeadm.go:581] duration metric: took 3.233960462s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:03:28.844466   97386 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:03:29.030901   97386 request.go:628] Waited for 186.346278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0531 19:03:29.030952   97386 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0531 19:03:29.030957   97386 round_trippers.go:469] Request Headers:
	I0531 19:03:29.030965   97386 round_trippers.go:473]     Accept: application/json, */*
	I0531 19:03:29.030972   97386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 19:03:29.033170   97386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 19:03:29.033187   97386 round_trippers.go:577] Response Headers:
	I0531 19:03:29.033194   97386 round_trippers.go:580]     Cache-Control: no-cache, private
	I0531 19:03:29.033200   97386 round_trippers.go:580]     Content-Type: application/json
	I0531 19:03:29.033205   97386 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0ad7ae55-617e-439e-96e2-19797c7dcc89
	I0531 19:03:29.033210   97386 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 080b38be-0fdd-45eb-9cd6-867be5c02c9e
	I0531 19:03:29.033215   97386 round_trippers.go:580]     Date: Wed, 31 May 2023 19:03:29 GMT
	I0531 19:03:29.033221   97386 round_trippers.go:580]     Audit-Id: 387bd3d5-1b12-404b-a377-623fc720a84c
	I0531 19:03:29.033375   97386 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"475"},"items":[{"metadata":{"name":"multinode-697136","uid":"02ea236c-3028-4f0b-a15c-41057d8730eb","resourceVersion":"390","creationTimestamp":"2023-05-31T19:02:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-697136","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7022875d4a054c2d518e5e5a7b9d500799d50140","minikube.k8s.io/name":"multinode-697136","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_31T19_02_24_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I0531 19:03:29.033820   97386 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0531 19:03:29.033833   97386 node_conditions.go:123] node cpu capacity is 8
	I0531 19:03:29.033840   97386 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0531 19:03:29.033844   97386 node_conditions.go:123] node cpu capacity is 8
	I0531 19:03:29.033847   97386 node_conditions.go:105] duration metric: took 189.377698ms to run NodePressure ...
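
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, not from the API server's Priority and Fairness feature. The limiter lives on rest.Config as QPS and Burst; a sketch of tuning it follows (the values shown are client-go's well-known defaults, used here for illustration; minikube's actual settings are not visible in this log):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/flowcontrol"
    )

    // newClient builds a clientset whose request rate is capped client-side.
    // When the token bucket is empty, each request blocks waiting for a token,
    // which is exactly what request.go:628 logs above.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 5    // client-go default; raise to reduce the logged waits
        cfg.Burst = 10 // client-go default
        cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
        return kubernetes.NewForConfig(cfg)
    }
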
	I0531 19:03:29.033856   97386 start.go:228] waiting for startup goroutines ...
	I0531 19:03:29.033886   97386 start.go:242] writing updated cluster config ...
	I0531 19:03:29.034158   97386 ssh_runner.go:195] Run: rm -f paused
	I0531 19:03:29.077770   97386 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0531 19:03:29.080673   97386 out.go:177] * Done! kubectl is now configured to use "multinode-697136" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* May 31 19:03:09 multinode-697136 crio[957]: time="2023-05-31 19:03:09.044969476Z" level=info msg="Starting container: 0507a955e50e289094660f355bec348df26300fc145efb95fab37fc5b74486d0" id=8f291a84-d1b3-4ef5-b014-31ac8632a45a name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:03:09 multinode-697136 crio[957]: time="2023-05-31 19:03:09.049695306Z" level=info msg="Created container 6aa7fa1aea6bcffab91c5fa9262ab8692e65287420a42489c6bf29767a948193: kube-system/coredns-5d78c9869d-fntsv/coredns" id=6136ca61-7107-4d7a-bc81-3fa4f4deb4c1 name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:03:09 multinode-697136 crio[957]: time="2023-05-31 19:03:09.050161760Z" level=info msg="Starting container: 6aa7fa1aea6bcffab91c5fa9262ab8692e65287420a42489c6bf29767a948193" id=e0966af7-9151-4075-88fc-559c9d5f25f8 name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:03:09 multinode-697136 crio[957]: time="2023-05-31 19:03:09.053549496Z" level=info msg="Started container" PID=2353 containerID=0507a955e50e289094660f355bec348df26300fc145efb95fab37fc5b74486d0 description=kube-system/storage-provisioner/storage-provisioner id=8f291a84-d1b3-4ef5-b014-31ac8632a45a name=/runtime.v1.RuntimeService/StartContainer sandboxID=eaa1d479904a178951f9febae3aea0b7c68c144bc64a25a00621347f800af1de
	May 31 19:03:09 multinode-697136 crio[957]: time="2023-05-31 19:03:09.059077354Z" level=info msg="Started container" PID=2362 containerID=6aa7fa1aea6bcffab91c5fa9262ab8692e65287420a42489c6bf29767a948193 description=kube-system/coredns-5d78c9869d-fntsv/coredns id=e0966af7-9151-4075-88fc-559c9d5f25f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f60ae27870d8d32f97860d3b5a485d43cb86a14d2f4e80936fdd3836ac4c3b33
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.075270148Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-jsm9c/POD" id=3bea967a-5be5-49ea-a9e3-21e8db8f7f65 name=/runtime.v1.RuntimeService/RunPodSandbox
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.075337687Z" level=warning msg="Allowed annotations are specified for workload []"
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.091025486Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-jsm9c Namespace:default ID:90bca19e380b6b724880b674f671d1144248d9df67bea014daf28ff9f4e12b01 UID:8e405caa-5cfb-423b-9f3f-bee6a4ccdd3f NetNS:/var/run/netns/9d659964-bbfc-4082-87a6-2d5a00c50d19 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.091060933Z" level=info msg="Adding pod default_busybox-67b7f59bb-jsm9c to CNI network \"kindnet\" (type=ptp)"
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.099705640Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-jsm9c Namespace:default ID:90bca19e380b6b724880b674f671d1144248d9df67bea014daf28ff9f4e12b01 UID:8e405caa-5cfb-423b-9f3f-bee6a4ccdd3f NetNS:/var/run/netns/9d659964-bbfc-4082-87a6-2d5a00c50d19 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.099832970Z" level=info msg="Checking pod default_busybox-67b7f59bb-jsm9c for CNI network kindnet (type=ptp)"
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.119007085Z" level=info msg="Ran pod sandbox 90bca19e380b6b724880b674f671d1144248d9df67bea014daf28ff9f4e12b01 with infra container: default/busybox-67b7f59bb-jsm9c/POD" id=3bea967a-5be5-49ea-a9e3-21e8db8f7f65 name=/runtime.v1.RuntimeService/RunPodSandbox
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.120038468Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=390c63b0-ac2a-433c-9d40-76deeb9fca32 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.120343549Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=390c63b0-ac2a-433c-9d40-76deeb9fca32 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.121170640Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=56257f7a-b6d8-4cf2-9a72-8ff896ad5184 name=/runtime.v1.ImageService/PullImage
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.125177808Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.273345983Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.662208651Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=56257f7a-b6d8-4cf2-9a72-8ff896ad5184 name=/runtime.v1.ImageService/PullImage
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.663185753Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=24537c1f-263b-4dc5-a39d-767863ff8f91 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.663755375Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=24537c1f-263b-4dc5-a39d-767863ff8f91 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.664616629Z" level=info msg="Creating container: default/busybox-67b7f59bb-jsm9c/busybox" id=204ed78f-e5ff-4feb-8994-3e5dc913cded name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.664736475Z" level=warning msg="Allowed annotations are specified for workload []"
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.718556766Z" level=info msg="Created container a6ebada6e50a8a34d1f8cf2e91d9076201d14686159f845d2f977d55fa944353: default/busybox-67b7f59bb-jsm9c/busybox" id=204ed78f-e5ff-4feb-8994-3e5dc913cded name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.719411234Z" level=info msg="Starting container: a6ebada6e50a8a34d1f8cf2e91d9076201d14686159f845d2f977d55fa944353" id=a81d9393-e95c-4043-92ad-0365f5a1f04c name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:03:30 multinode-697136 crio[957]: time="2023-05-31 19:03:30.727999492Z" level=info msg="Started container" PID=2525 containerID=a6ebada6e50a8a34d1f8cf2e91d9076201d14686159f845d2f977d55fa944353 description=default/busybox-67b7f59bb-jsm9c/busybox id=a81d9393-e95c-4043-92ad-0365f5a1f04c name=/runtime.v1.RuntimeService/StartContainer sandboxID=90bca19e380b6b724880b674f671d1144248d9df67bea014daf28ff9f4e12b01
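
The CRI-O log above records the standard CRI call sequence for the busybox pod: RunPodSandbox, ImageStatus, PullImage, CreateContainer, StartContainer. The sketch below issues the same image and runtime RPCs against the CRI-O socket using the generated cri-api client; it is illustrative only, with the socket path taken from the kubeadm cri-socket annotation above:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        img := runtimeapi.NewImageServiceClient(conn)
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // ImageStatus, then PullImage if absent: the sequence logged above.
        spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}
        st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
        if err == nil && st.GetImage() == nil {
            if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
                panic(err)
            }
        }

        // List containers, roughly what the "container status" section reports.
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Println(c.Id[:13], c.GetImage().GetImage(), c.State)
        }
    }
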
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a6ebada6e50a8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   90bca19e380b6       busybox-67b7f59bb-jsm9c
	6aa7fa1aea6bc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      25 seconds ago       Running             coredns                   0                   f60ae27870d8d       coredns-5d78c9869d-fntsv
	0507a955e50e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      25 seconds ago       Running             storage-provisioner       0                   eaa1d479904a1       storage-provisioner
	5fe24e46059ce       b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee                                      57 seconds ago       Running             kube-proxy                0                   c212366743616       kube-proxy-tgk57
	11ec60c867810       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      57 seconds ago       Running             kindnet-cni               0                   67a759a5ce8f8       kindnet-hgzvz
	ed6eefe8b6036       ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12                                      About a minute ago   Running             kube-controller-manager   0                   2c3244719e615       kube-controller-manager-multinode-697136
	e95e759b28dae       c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370                                      About a minute ago   Running             kube-apiserver            0                   2a4835918abf9       kube-apiserver-multinode-697136
	626bf80f024a9       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   fbc64874d5f70       etcd-multinode-697136
	7dd3be9739ad1       89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0                                      About a minute ago   Running             kube-scheduler            0                   34929405424ff       kube-scheduler-multinode-697136
	
	* 
	* ==> coredns [6aa7fa1aea6bcffab91c5fa9262ab8692e65287420a42489c6bf29767a948193] <==
	* [INFO] 10.244.0.3:57679 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104995s
	[INFO] 10.244.1.2:58431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113238s
	[INFO] 10.244.1.2:47585 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001772237s
	[INFO] 10.244.1.2:51524 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076722s
	[INFO] 10.244.1.2:51909 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062945s
	[INFO] 10.244.1.2:54159 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001264497s
	[INFO] 10.244.1.2:56657 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091966s
	[INFO] 10.244.1.2:38497 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080958s
	[INFO] 10.244.1.2:36290 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072281s
	[INFO] 10.244.0.3:52671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099801s
	[INFO] 10.244.0.3:52499 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073611s
	[INFO] 10.244.0.3:57290 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054223s
	[INFO] 10.244.0.3:42059 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000037052s
	[INFO] 10.244.1.2:40758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013416s
	[INFO] 10.244.1.2:35875 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097539s
	[INFO] 10.244.1.2:37313 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076137s
	[INFO] 10.244.1.2:60354 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000714s
	[INFO] 10.244.0.3:36317 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092923s
	[INFO] 10.244.0.3:43529 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116377s
	[INFO] 10.244.0.3:44280 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082214s
	[INFO] 10.244.0.3:43674 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092607s
	[INFO] 10.244.1.2:56559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118096s
	[INFO] 10.244.1.2:42032 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000064115s
	[INFO] 10.244.1.2:51376 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076662s
	[INFO] 10.244.1.2:55853 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000075903s
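
In the CoreDNS queries above, "kubernetes.default" is answered NXDOMAIN with flags rd,ra (forwarded upstream), "kubernetes.default.default.svc.cluster.local" is answered NXDOMAIN authoritatively, and "kubernetes.default.svc.cluster.local" is answered NOERROR; this is the resolv.conf search-path expansion performed inside the client pods. A typical resolver config for a pod in the default namespace would look like the sketch below; the nameserver matches the 10.0.96.10.in-addr.arpa PTR lookups above, while the search line is an assumption based on standard cluster DNS conventions:

    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5

With ndots:5, a name with fewer than five dots such as kubernetes.default is expanded through each search suffix until one resolves, which produces exactly the NXDOMAIN/NOERROR sequence logged.
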
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-697136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-697136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=multinode-697136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T19_02_24_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 19:02:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-697136
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 19:03:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 19:03:08 +0000   Wed, 31 May 2023 19:02:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 19:03:08 +0000   Wed, 31 May 2023 19:02:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 19:03:08 +0000   Wed, 31 May 2023 19:02:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 19:03:08 +0000   Wed, 31 May 2023 19:03:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-697136
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	System Info:
	  Machine ID:                 188065c05448443fbbbfe50ad8adee68
	  System UUID:                887bb828-a58f-41c6-80c1-c8b1d2c24adc
	  Boot ID:                    858e553b-6392-44c5-a611-8f56a2b0fab6
	  Kernel Version:             5.15.0-1035-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-jsm9c                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5d78c9869d-fntsv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-multinode-697136                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         71s
	  kube-system                 kindnet-hgzvz                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-multinode-697136             250m (3%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-multinode-697136    200m (2%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-tgk57                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-multinode-697136             100m (1%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)   100m (1%)
	  memory             220Mi (0%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 56s   kube-proxy       
	  Normal  Starting                 71s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s   kubelet          Node multinode-697136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s   kubelet          Node multinode-697136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s   kubelet          Node multinode-697136 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s   node-controller  Node multinode-697136 event: Registered Node multinode-697136 in Controller
	  Normal  NodeReady                26s   kubelet          Node multinode-697136 status is now: NodeReady
	
	
	Name:               multinode-697136-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-697136-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 19:03:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-697136-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 19:03:27 +0000   Wed, 31 May 2023 19:03:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 19:03:27 +0000   Wed, 31 May 2023 19:03:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 19:03:27 +0000   Wed, 31 May 2023 19:03:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 19:03:27 +0000   Wed, 31 May 2023 19:03:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-697136-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9b5c46adf0848cfbbdbe2c9b558d917
	  System UUID:                c1c8aa31-eab2-4aa6-9369-82690dca0f65
	  Boot ID:                    858e553b-6392-44c5-a611-8f56a2b0fab6
	  Kernel Version:             5.15.0-1035-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-rvdrs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-47nmj              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10s
	  kube-system                 kube-proxy-wc7m5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 8s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  10s (x5 over 11s)  kubelet          Node multinode-697136-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x5 over 11s)  kubelet          Node multinode-697136-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x5 over 11s)  kubelet          Node multinode-697136-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                 node-controller  Node multinode-697136-m02 event: Registered Node multinode-697136-m02 in Controller
	  Normal  NodeReady                7s                 kubelet          Node multinode-697136-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004959] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.007966] FS-Cache: N-cookie d=0000000091dc95f5{9p.inode} n=00000000d3cdecde
	[  +0.008741] FS-Cache: N-key=[8] '74a00f0200000000'
	[  +0.313415] FS-Cache: Duplicate cookie detected
	[  +0.004687] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006753] FS-Cache: O-cookie d=0000000091dc95f5{9p.inode} n=000000009f8e728a
	[  +0.007402] FS-Cache: O-key=[8] '83a00f0200000000'
	[  +0.006311] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006577] FS-Cache: N-cookie d=0000000091dc95f5{9p.inode} n=00000000594058f6
	[  +0.007352] FS-Cache: N-key=[8] '83a00f0200000000'
	[ +19.428279] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[May31 18:54] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[  +1.028188] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[  +2.015837] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[  +4.255686] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[May31 18:55] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[ +16.126833] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	[ +33.277509] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: b2 07 cb 80 8c 32 ae f3 bc ce 59 6d 08 00
	
	* 
	* ==> etcd [626bf80f024a93fa5d608a4d64a0bfee1c454f334df9ad9c252d295b4206d730] <==
	* {"level":"info","ts":"2023-05-31T19:02:18.775Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-05-31T19:02:18.776Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-31T19:02:18.776Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-05-31T19:02:18.776Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-05-31T19:02:18.776Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-31T19:02:18.777Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-31T19:02:19.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-05-31T19:02:19.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-05-31T19:02:19.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-05-31T19:02:19.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-05-31T19:02:19.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-05-31T19:02:19.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-05-31T19:02:19.460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-05-31T19:02:19.461Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:02:19.462Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-697136 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-31T19:02:19.462Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:02:19.462Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:02:19.462Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-31T19:02:19.462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-31T19:02:19.462Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:02:19.462Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:02:19.462Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:02:19.463Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-31T19:02:19.463Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-05-31T19:03:15.265Z","caller":"traceutil/trace.go:171","msg":"trace[796306190] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"154.932866ms","start":"2023-05-31T19:03:15.110Z","end":"2023-05-31T19:03:15.265Z","steps":["trace[796306190] 'process raft request'  (duration: 154.829165ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:03:35 up 46 min,  0 users,  load average: 0.78, 1.06, 0.78
	Linux multinode-697136 5.15.0-1035-gcp #43~20.04.1-Ubuntu SMP Mon May 22 16:49:11 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [11ec60c867810444e7d5d6c81ada73c39b915be26436a1c2d8fabb7d77350ff3] <==
	* I0531 19:02:37.854804       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 19:02:37.854889       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0531 19:02:37.855022       1 main.go:116] setting mtu 1500 for CNI 
	I0531 19:02:37.855050       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 19:02:37.941758       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 19:03:08.175898       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0531 19:03:08.184280       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0531 19:03:08.184323       1 main.go:227] handling current node
	I0531 19:03:18.199383       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0531 19:03:18.199406       1 main.go:227] handling current node
	I0531 19:03:28.211993       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0531 19:03:28.212017       1 main.go:227] handling current node
	I0531 19:03:28.212025       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0531 19:03:28.212030       1 main.go:250] Node multinode-697136-m02 has CIDR [10.244.1.0/24] 
	I0531 19:03:28.212180       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [e95e759b28daefcb2a32d79f11150e23e9f6f926263234c3defb955296a19e9a] <==
	* I0531 19:02:20.642187       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0531 19:02:20.642270       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0531 19:02:20.642314       1 cache.go:39] Caches are synced for autoregister controller
	I0531 19:02:20.642534       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 19:02:20.642564       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0531 19:02:20.644097       1 controller.go:624] quota admission added evaluator for: namespaces
	I0531 19:02:20.644348       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0531 19:02:20.652106       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0531 19:02:20.742072       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:02:21.283856       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 19:02:21.503005       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0531 19:02:21.507898       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0531 19:02:21.507914       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:02:21.914038       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:02:21.949968       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 19:02:22.061479       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0531 19:02:22.067116       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0531 19:02:22.067964       1 controller.go:624] quota admission added evaluator for: endpoints
	I0531 19:02:22.071680       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 19:02:22.555502       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0531 19:02:23.258423       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0531 19:02:23.270483       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0531 19:02:23.280417       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0531 19:02:36.199210       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0531 19:02:37.097772       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [ed6eefe8b6036dde731feb49e65e2e72253b1dfbd30c166085223effc6bc70f2] <==
	* I0531 19:02:36.346957       1 shared_informer.go:318] Caches are synced for ephemeral
	I0531 19:02:36.346994       1 shared_informer.go:318] Caches are synced for attach detach
	I0531 19:02:36.347472       1 shared_informer.go:318] Caches are synced for persistent volume
	I0531 19:02:36.349904       1 shared_informer.go:318] Caches are synced for resource quota
	I0531 19:02:36.664382       1 shared_informer.go:318] Caches are synced for garbage collector
	I0531 19:02:36.664412       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0531 19:02:36.667541       1 shared_informer.go:318] Caches are synced for garbage collector
	I0531 19:02:37.105589       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tgk57"
	I0531 19:02:37.107349       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hgzvz"
	I0531 19:02:37.151971       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-f68b6"
	I0531 19:02:37.157883       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-fntsv"
	I0531 19:02:37.416685       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0531 19:02:37.429232       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-f68b6"
	I0531 19:03:11.213271       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0531 19:03:24.976907       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-697136-m02\" does not exist"
	I0531 19:03:24.983339       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-697136-m02" podCIDRs=[10.244.1.0/24]
	I0531 19:03:24.988436       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-47nmj"
	I0531 19:03:24.988541       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wc7m5"
	I0531 19:03:26.215476       1 event.go:307] "Event occurred" object="multinode-697136-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-697136-m02 event: Registered Node multinode-697136-m02 in Controller"
	I0531 19:03:26.215534       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-697136-m02"
	W0531 19:03:27.338542       1 topologycache.go:232] Can't get CPU or zone information for multinode-697136-m02 node
	I0531 19:03:29.751550       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0531 19:03:29.760803       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-rvdrs"
	I0531 19:03:29.766496       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-jsm9c"
	I0531 19:03:31.227735       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-rvdrs" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-rvdrs"
	
	* 
	* ==> kube-proxy [5fe24e46059ce693f573a1dd08e1ea85d83cd5f11a04089f6e793bc9898b73cc] <==
	* I0531 19:02:38.065758       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0531 19:02:38.065857       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0531 19:02:38.065901       1 server_others.go:551] "Using iptables proxy"
	I0531 19:02:38.244810       1 server_others.go:190] "Using iptables Proxier"
	I0531 19:02:38.244874       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 19:02:38.244888       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0531 19:02:38.244907       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0531 19:02:38.244957       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:02:38.245918       1 server.go:657] "Version info" version="v1.27.2"
	I0531 19:02:38.246197       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:02:38.247471       1 config.go:188] "Starting service config controller"
	I0531 19:02:38.247499       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0531 19:02:38.247591       1 config.go:97] "Starting endpoint slice config controller"
	I0531 19:02:38.247639       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0531 19:02:38.248368       1 config.go:315] "Starting node config controller"
	I0531 19:02:38.248497       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0531 19:02:38.347662       1 shared_informer.go:318] Caches are synced for service config
	I0531 19:02:38.347735       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0531 19:02:38.348862       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7dd3be9739ad1e3009ec22de7c41785d110ba70939a81b7480288990abb8b378] <==
	* W0531 19:02:20.655209       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:02:20.655242       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 19:02:20.655278       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 19:02:20.655381       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:02:20.655401       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 19:02:20.655410       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 19:02:20.655384       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 19:02:20.655311       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:02:20.655437       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 19:02:20.655445       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 19:02:20.655336       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:02:20.655464       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 19:02:20.655298       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 19:02:20.655480       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 19:02:21.644383       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:02:21.644416       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 19:02:21.647730       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:02:21.647760       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 19:02:21.669114       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 19:02:21.669149       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 19:02:21.701687       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:02:21.701802       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 19:02:21.909313       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 19:02:21.909339       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 19:02:24.247627       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 31 19:02:37 multinode-697136 kubelet[1592]: I0531 19:02:37.240389    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmjp4\" (UniqueName: \"kubernetes.io/projected/5519ebaa-7169-4dbb-8a30-f179ad47d28b-kube-api-access-jmjp4\") pod \"kindnet-hgzvz\" (UID: \"5519ebaa-7169-4dbb-8a30-f179ad47d28b\") " pod="kube-system/kindnet-hgzvz"
	May 31 19:02:37 multinode-697136 kubelet[1592]: I0531 19:02:37.240487    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5519ebaa-7169-4dbb-8a30-f179ad47d28b-xtables-lock\") pod \"kindnet-hgzvz\" (UID: \"5519ebaa-7169-4dbb-8a30-f179ad47d28b\") " pod="kube-system/kindnet-hgzvz"
	May 31 19:02:37 multinode-697136 kubelet[1592]: I0531 19:02:37.240513    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5519ebaa-7169-4dbb-8a30-f179ad47d28b-lib-modules\") pod \"kindnet-hgzvz\" (UID: \"5519ebaa-7169-4dbb-8a30-f179ad47d28b\") " pod="kube-system/kindnet-hgzvz"
	May 31 19:02:37 multinode-697136 kubelet[1592]: I0531 19:02:37.240546    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47badf8b-17e5-49d3-bdde-743b58a05b7d-kube-proxy\") pod \"kube-proxy-tgk57\" (UID: \"47badf8b-17e5-49d3-bdde-743b58a05b7d\") " pod="kube-system/kube-proxy-tgk57"
	May 31 19:02:37 multinode-697136 kubelet[1592]: I0531 19:02:37.240604    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5519ebaa-7169-4dbb-8a30-f179ad47d28b-cni-cfg\") pod \"kindnet-hgzvz\" (UID: \"5519ebaa-7169-4dbb-8a30-f179ad47d28b\") " pod="kube-system/kindnet-hgzvz"
	May 31 19:02:37 multinode-697136 kubelet[1592]: I0531 19:02:37.240672    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47badf8b-17e5-49d3-bdde-743b58a05b7d-xtables-lock\") pod \"kube-proxy-tgk57\" (UID: \"47badf8b-17e5-49d3-bdde-743b58a05b7d\") " pod="kube-system/kube-proxy-tgk57"
	May 31 19:02:37 multinode-697136 kubelet[1592]: W0531 19:02:37.448882    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/crio/crio-c2123667436162aa50567da56a3783b079957ac0d922ceb5f03f5742770054bb WatchSource:0}: Error finding container c2123667436162aa50567da56a3783b079957ac0d922ceb5f03f5742770054bb: Status 404 returned error can't find the container with id c2123667436162aa50567da56a3783b079957ac0d922ceb5f03f5742770054bb
	May 31 19:02:37 multinode-697136 kubelet[1592]: W0531 19:02:37.461511    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/crio/crio-67a759a5ce8f80d9aa46d45d5b054200be854dfe22b350f891a1e31722f09fd0 WatchSource:0}: Error finding container 67a759a5ce8f80d9aa46d45d5b054200be854dfe22b350f891a1e31722f09fd0: Status 404 returned error can't find the container with id 67a759a5ce8f80d9aa46d45d5b054200be854dfe22b350f891a1e31722f09fd0
	May 31 19:02:38 multinode-697136 kubelet[1592]: I0531 19:02:38.451341    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tgk57" podStartSLOduration=1.451287877 podCreationTimestamp="2023-05-31 19:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-31 19:02:38.403087938 +0000 UTC m=+15.172367935" watchObservedRunningTime="2023-05-31 19:02:38.451287877 +0000 UTC m=+15.220567870"
	May 31 19:03:08 multinode-697136 kubelet[1592]: I0531 19:03:08.610909    1592 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	May 31 19:03:08 multinode-697136 kubelet[1592]: I0531 19:03:08.634489    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-hgzvz" podStartSLOduration=31.634434689 podCreationTimestamp="2023-05-31 19:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-31 19:02:38.451472615 +0000 UTC m=+15.220752610" watchObservedRunningTime="2023-05-31 19:03:08.634434689 +0000 UTC m=+45.403714687"
	May 31 19:03:08 multinode-697136 kubelet[1592]: I0531 19:03:08.634828    1592 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:03:08 multinode-697136 kubelet[1592]: I0531 19:03:08.636617    1592 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:03:08 multinode-697136 kubelet[1592]: I0531 19:03:08.683211    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a603b3b-cd36-4c4e-9c48-272ebf4323ee-config-volume\") pod \"coredns-5d78c9869d-fntsv\" (UID: \"3a603b3b-cd36-4c4e-9c48-272ebf4323ee\") " pod="kube-system/coredns-5d78c9869d-fntsv"
	May 31 19:03:08 multinode-697136 kubelet[1592]: I0531 19:03:08.683266    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fcd8fca8-4007-413a-bb66-be3052cea26f-tmp\") pod \"storage-provisioner\" (UID: \"fcd8fca8-4007-413a-bb66-be3052cea26f\") " pod="kube-system/storage-provisioner"
	May 31 19:03:08 multinode-697136 kubelet[1592]: I0531 19:03:08.683299    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7tmb\" (UniqueName: \"kubernetes.io/projected/3a603b3b-cd36-4c4e-9c48-272ebf4323ee-kube-api-access-h7tmb\") pod \"coredns-5d78c9869d-fntsv\" (UID: \"3a603b3b-cd36-4c4e-9c48-272ebf4323ee\") " pod="kube-system/coredns-5d78c9869d-fntsv"
	May 31 19:03:08 multinode-697136 kubelet[1592]: I0531 19:03:08.683328    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfkb2\" (UniqueName: \"kubernetes.io/projected/fcd8fca8-4007-413a-bb66-be3052cea26f-kube-api-access-tfkb2\") pod \"storage-provisioner\" (UID: \"fcd8fca8-4007-413a-bb66-be3052cea26f\") " pod="kube-system/storage-provisioner"
	May 31 19:03:08 multinode-697136 kubelet[1592]: W0531 19:03:08.955091    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/crio/crio-eaa1d479904a178951f9febae3aea0b7c68c144bc64a25a00621347f800af1de WatchSource:0}: Error finding container eaa1d479904a178951f9febae3aea0b7c68c144bc64a25a00621347f800af1de: Status 404 returned error can't find the container with id eaa1d479904a178951f9febae3aea0b7c68c144bc64a25a00621347f800af1de
	May 31 19:03:08 multinode-697136 kubelet[1592]: W0531 19:03:08.985050    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/crio/crio-f60ae27870d8d32f97860d3b5a485d43cb86a14d2f4e80936fdd3836ac4c3b33 WatchSource:0}: Error finding container f60ae27870d8d32f97860d3b5a485d43cb86a14d2f4e80936fdd3836ac4c3b33: Status 404 returned error can't find the container with id f60ae27870d8d32f97860d3b5a485d43cb86a14d2f4e80936fdd3836ac4c3b33
	May 31 19:03:09 multinode-697136 kubelet[1592]: I0531 19:03:09.454604    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-fntsv" podStartSLOduration=32.454562152 podCreationTimestamp="2023-05-31 19:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-31 19:03:09.454452628 +0000 UTC m=+46.223732626" watchObservedRunningTime="2023-05-31 19:03:09.454562152 +0000 UTC m=+46.223842151"
	May 31 19:03:09 multinode-697136 kubelet[1592]: I0531 19:03:09.463865    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.463820938 podCreationTimestamp="2023-05-31 19:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-31 19:03:09.463567395 +0000 UTC m=+46.232847393" watchObservedRunningTime="2023-05-31 19:03:09.463820938 +0000 UTC m=+46.233100936"
	May 31 19:03:29 multinode-697136 kubelet[1592]: I0531 19:03:29.773582    1592 topology_manager.go:212] "Topology Admit Handler"
	May 31 19:03:29 multinode-697136 kubelet[1592]: I0531 19:03:29.809871    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bg6b\" (UniqueName: \"kubernetes.io/projected/8e405caa-5cfb-423b-9f3f-bee6a4ccdd3f-kube-api-access-7bg6b\") pod \"busybox-67b7f59bb-jsm9c\" (UID: \"8e405caa-5cfb-423b-9f3f-bee6a4ccdd3f\") " pod="default/busybox-67b7f59bb-jsm9c"
	May 31 19:03:30 multinode-697136 kubelet[1592]: W0531 19:03:30.117480    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/crio/crio-90bca19e380b6b724880b674f671d1144248d9df67bea014daf28ff9f4e12b01 WatchSource:0}: Error finding container 90bca19e380b6b724880b674f671d1144248d9df67bea014daf28ff9f4e12b01: Status 404 returned error can't find the container with id 90bca19e380b6b724880b674f671d1144248d9df67bea014daf28ff9f4e12b01
	May 31 19:03:31 multinode-697136 kubelet[1592]: I0531 19:03:31.496919    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-jsm9c" podStartSLOduration=1.954750177 podCreationTimestamp="2023-05-31 19:03:29 +0000 UTC" firstStartedPulling="2023-05-31 19:03:30.120543993 +0000 UTC m=+66.889823983" lastFinishedPulling="2023-05-31 19:03:30.662654584 +0000 UTC m=+67.431934574" observedRunningTime="2023-05-31 19:03:31.496621552 +0000 UTC m=+68.265901551" watchObservedRunningTime="2023-05-31 19:03:31.496860768 +0000 UTC m=+68.266140764"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-697136 -n multinode-697136
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-697136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.01s)
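
The kindnet log above ends with it installing a route for the second node's pod CIDR (10.244.1.0/24 via 192.168.58.3); the printed struct is the String form of a netlink route. As a rough illustration, the operation reduces to something like the following sketch, assuming the github.com/vishvananda/netlink package; this shows the technique, not kindnet's actual source:

package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Values taken from the kindnet log line above (hypothetical reconstruction).
	_, podCIDR, err := net.ParseCIDR("10.244.1.0/24") // pod CIDR of multinode-697136-m02
	if err != nil {
		log.Fatal(err)
	}
	route := &netlink.Route{
		Dst: podCIDR,
		Gw:  net.ParseIP("192.168.58.3"), // node IP of multinode-697136-m02
	}
	// RouteReplace installs the route, updating it in place if one already exists.
	if err := netlink.RouteReplace(route); err != nil {
		log.Fatal(err)
	}
}

Run as root on the node, this produces the same kernel state as `ip route replace 10.244.1.0/24 via 192.168.58.3`.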

TestPreload (149.18s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-575369 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0531 19:09:19.635268   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-575369 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m6.075472021s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-575369 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-575369
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-575369: (5.612418402s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-575369 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0531 19:10:50.615720   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-575369 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m12.334162092s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-575369 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	IMAGE               TAG                 IMAGE ID            SIZE

-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-05-31 19:11:12.651716889 +0000 UTC m=+1659.910072342
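
The assertion that fails here reduces to: an image pulled before the stop/start cycle (gcr.io/k8s-minikube/busybox) should still appear in `crictl image ls` inside the node afterwards, but the list came back empty. A minimal way to reproduce the check by hand, shelling out to the same commands the log shows (a sketch, not the actual preload_test.go source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs: list CRI images inside the node over minikube ssh.
	out, err := exec.Command("out/minikube-linux-amd64",
		"ssh", "-p", "test-preload-575369", "--",
		"sudo", "crictl", "image", "ls").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// Expect the previously pulled busybox image to have survived the restart.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Printf("FAIL: busybox missing from image list:\n%s\n", out)
	}
}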
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-575369
helpers_test.go:235: (dbg) docker inspect test-preload-575369:

-- stdout --
	[
	    {
	        "Id": "160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4",
	        "Created": "2023-05-31T19:08:48.732635383Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 133995,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T19:10:06.666747666Z",
	            "FinishedAt": "2023-05-31T19:09:59.818886521Z"
	        },
	        "Image": "sha256:f246fffc476e503eec088cb85bddb7b217288054dd7e1375d4f95eca27f4bce3",
	        "ResolvConfPath": "/var/lib/docker/containers/160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4/hosts",
	        "LogPath": "/var/lib/docker/containers/160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4/160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4-json.log",
	        "Name": "/test-preload-575369",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-575369:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "test-preload-575369",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5204eb1d107ba2185b42a4a0439a173b50406ad13086adbc63aa2e654418163-init/diff:/var/lib/docker/overlay2/ff5bbba96769eca5d0c1a4ffdb04787b9f84aae4dcd4bc9929a365a3d058b20f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5204eb1d107ba2185b42a4a0439a173b50406ad13086adbc63aa2e654418163/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5204eb1d107ba2185b42a4a0439a173b50406ad13086adbc63aa2e654418163/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5204eb1d107ba2185b42a4a0439a173b50406ad13086adbc63aa2e654418163/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-575369",
	                "Source": "/var/lib/docker/volumes/test-preload-575369/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-575369",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-575369",
	                "name.minikube.sigs.k8s.io": "test-preload-575369",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e207dc0dd47c69f6c8b7e78ee48f30e3228026828d740306c14f857517762a0c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e207dc0dd47c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-575369": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "160325a16b6d",
	                        "test-preload-575369"
	                    ],
	                    "NetworkID": "a9af2e5dd07674767799d40d588c71549df7acc7ccab6a2eb8f71d3e3ada57cb",
	                    "EndpointID": "8c3882bda19675810dfc49418eb9f3fded74b245a90aa561551e8bd2b86636e2",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-575369 -n test-preload-575369
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-575369 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-575369 logs -n 25: (1.034704274s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-697136 ssh -n                                                                 | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:04 UTC | 31 May 23 19:04 UTC |
	|         | multinode-697136-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-697136 ssh -n multinode-697136 sudo cat                                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:04 UTC | 31 May 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-697136-m03_multinode-697136.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-697136 cp multinode-697136-m03:/home/docker/cp-test.txt                       | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:04 UTC | 31 May 23 19:04 UTC |
	|         | multinode-697136-m02:/home/docker/cp-test_multinode-697136-m03_multinode-697136-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-697136 ssh -n                                                                 | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:04 UTC | 31 May 23 19:04 UTC |
	|         | multinode-697136-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-697136 ssh -n multinode-697136-m02 sudo cat                                   | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:04 UTC | 31 May 23 19:04 UTC |
	|         | /home/docker/cp-test_multinode-697136-m03_multinode-697136-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-697136 node stop m03                                                          | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:04 UTC | 31 May 23 19:04 UTC |
	| node    | multinode-697136 node start                                                             | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:04 UTC | 31 May 23 19:04 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-697136                                                                | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:04 UTC |                     |
	| stop    | -p multinode-697136                                                                     | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:04 UTC | 31 May 23 19:05 UTC |
	| start   | -p multinode-697136                                                                     | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:05 UTC | 31 May 23 19:06 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-697136                                                                | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:06 UTC |                     |
	| node    | multinode-697136 node delete                                                            | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:06 UTC | 31 May 23 19:06 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-697136 stop                                                                   | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:06 UTC | 31 May 23 19:07 UTC |
	| start   | -p multinode-697136                                                                     | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:07 UTC | 31 May 23 19:08 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-697136                                                                | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:08 UTC |                     |
	| start   | -p multinode-697136-m02                                                                 | multinode-697136-m02 | jenkins | v1.30.1 | 31 May 23 19:08 UTC |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-697136-m03                                                                 | multinode-697136-m03 | jenkins | v1.30.1 | 31 May 23 19:08 UTC | 31 May 23 19:08 UTC |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-697136                                                                 | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:08 UTC |                     |
	| delete  | -p multinode-697136-m03                                                                 | multinode-697136-m03 | jenkins | v1.30.1 | 31 May 23 19:08 UTC | 31 May 23 19:08 UTC |
	| delete  | -p multinode-697136                                                                     | multinode-697136     | jenkins | v1.30.1 | 31 May 23 19:08 UTC | 31 May 23 19:08 UTC |
	| start   | -p test-preload-575369                                                                  | test-preload-575369  | jenkins | v1.30.1 | 31 May 23 19:08 UTC | 31 May 23 19:09 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                      |         |         |                     |                     |
	|         | --driver=docker                                                                         |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-575369                                                                  | test-preload-575369  | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:09 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-575369                                                                  | test-preload-575369  | jenkins | v1.30.1 | 31 May 23 19:09 UTC | 31 May 23 19:10 UTC |
	| start   | -p test-preload-575369                                                                  | test-preload-575369  | jenkins | v1.30.1 | 31 May 23 19:10 UTC | 31 May 23 19:11 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| ssh     | -p test-preload-575369 -- sudo                                                          | test-preload-575369  | jenkins | v1.30.1 | 31 May 23 19:11 UTC | 31 May 23 19:11 UTC |
	|         | crictl image ls                                                                         |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 19:10:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:10:00.098839  133708 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:10:00.099039  133708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:10:00.099052  133708 out.go:309] Setting ErrFile to fd 2...
	I0531 19:10:00.099059  133708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:10:00.099423  133708 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 19:10:00.100194  133708 out.go:303] Setting JSON to false
	I0531 19:10:00.101483  133708 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3149,"bootTime":1685557051,"procs":571,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:10:00.101548  133708 start.go:137] virtualization: kvm guest
	I0531 19:10:00.104868  133708 out.go:177] * [test-preload-575369] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:10:00.107003  133708 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:10:00.107055  133708 notify.go:220] Checking for updates...
	I0531 19:10:00.109134  133708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:10:00.111509  133708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:10:00.113647  133708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 19:10:00.115921  133708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:10:00.118288  133708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:10:00.121137  133708 config.go:182] Loaded profile config "test-preload-575369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0531 19:10:00.124108  133708 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0531 19:10:00.126287  133708 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:10:00.148757  133708 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:10:00.148855  133708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:10:00.198295  133708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:36 SystemTime:2023-05-31 19:10:00.189604109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:10:00.198392  133708 docker.go:294] overlay module found
	I0531 19:10:00.202278  133708 out.go:177] * Using the docker driver based on existing profile
	I0531 19:10:00.204075  133708 start.go:297] selected driver: docker
	I0531 19:10:00.204087  133708 start.go:875] validating driver "docker" against &{Name:test-preload-575369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-575369 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:10:00.204182  133708 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:10:00.204969  133708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:10:00.250834  133708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:36 SystemTime:2023-05-31 19:10:00.242568974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:10:00.251106  133708 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:10:00.251132  133708 cni.go:84] Creating CNI manager for ""
	I0531 19:10:00.251142  133708 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:10:00.251154  133708 start_flags.go:319] config:
	{Name:test-preload-575369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-575369 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:10:00.254814  133708 out.go:177] * Starting control plane node test-preload-575369 in cluster test-preload-575369
	I0531 19:10:00.256665  133708 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:10:00.258466  133708 out.go:177] * Pulling base image ...
	I0531 19:10:00.260076  133708 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0531 19:10:00.260103  133708 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:10:00.275968  133708 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:10:00.275989  133708 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	I0531 19:10:00.288972  133708 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0531 19:10:00.288998  133708 cache.go:57] Caching tarball of preloaded images
	I0531 19:10:00.289156  133708 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0531 19:10:00.291578  133708 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0531 19:10:00.293385  133708 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0531 19:10:00.320867  133708 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0531 19:10:05.472665  133708 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0531 19:10:05.472760  133708 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0531 19:10:06.355395  133708 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I0531 19:10:06.355550  133708 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/config.json ...
	I0531 19:10:06.355756  133708 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:10:06.355785  133708 start.go:364] acquiring machines lock for test-preload-575369: {Name:mk6428fe9f204250cdc89e77d1ac25d945debf7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:10:06.355867  133708 start.go:368] acquired machines lock for "test-preload-575369" in 44.799µs
	I0531 19:10:06.355885  133708 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:10:06.355890  133708 fix.go:55] fixHost starting: 
	I0531 19:10:06.356090  133708 cli_runner.go:164] Run: docker container inspect test-preload-575369 --format={{.State.Status}}
	I0531 19:10:06.371592  133708 fix.go:103] recreateIfNeeded on test-preload-575369: state=Stopped err=<nil>
	W0531 19:10:06.371637  133708 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 19:10:06.374383  133708 out.go:177] * Restarting existing docker container for "test-preload-575369" ...
	I0531 19:10:06.376393  133708 cli_runner.go:164] Run: docker start test-preload-575369
	I0531 19:10:06.674276  133708 cli_runner.go:164] Run: docker container inspect test-preload-575369 --format={{.State.Status}}
	I0531 19:10:06.690952  133708 kic.go:426] container "test-preload-575369" state is running.
	I0531 19:10:06.691460  133708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-575369
	I0531 19:10:06.707392  133708 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/config.json ...
	I0531 19:10:06.707609  133708 machine.go:88] provisioning docker machine ...
	I0531 19:10:06.707634  133708 ubuntu.go:169] provisioning hostname "test-preload-575369"
	I0531 19:10:06.707675  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:10:06.724115  133708 main.go:141] libmachine: Using SSH client type: native
	I0531 19:10:06.724832  133708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0531 19:10:06.724866  133708 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-575369 && echo "test-preload-575369" | sudo tee /etc/hostname
	I0531 19:10:06.725522  133708 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38010->127.0.0.1:32902: read: connection reset by peer
	I0531 19:10:09.854597  133708 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-575369
	
	I0531 19:10:09.854670  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:10:09.870985  133708 main.go:141] libmachine: Using SSH client type: native
	I0531 19:10:09.871430  133708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0531 19:10:09.871463  133708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-575369' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-575369/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-575369' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:10:09.980125  133708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:10:09.980154  133708 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-7270/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-7270/.minikube}
	I0531 19:10:09.980177  133708 ubuntu.go:177] setting up certificates
	I0531 19:10:09.980186  133708 provision.go:83] configureAuth start
	I0531 19:10:09.980232  133708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-575369
	I0531 19:10:09.996570  133708 provision.go:138] copyHostCerts
	I0531 19:10:09.996630  133708 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem, removing ...
	I0531 19:10:09.996644  133708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem
	I0531 19:10:09.996705  133708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem (1123 bytes)
	I0531 19:10:09.996796  133708 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem, removing ...
	I0531 19:10:09.996804  133708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem
	I0531 19:10:09.996827  133708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem (1675 bytes)
	I0531 19:10:09.996883  133708 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem, removing ...
	I0531 19:10:09.996890  133708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem
	I0531 19:10:09.996910  133708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem (1078 bytes)
	I0531 19:10:09.996954  133708 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem org=jenkins.test-preload-575369 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-575369]
	I0531 19:10:10.167219  133708 provision.go:172] copyRemoteCerts
	I0531 19:10:10.167278  133708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:10:10.167314  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:10:10.182932  133708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/test-preload-575369/id_rsa Username:docker}
	I0531 19:10:10.268444  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:10:10.288526  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0531 19:10:10.308197  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 19:10:10.327834  133708 provision.go:86] duration metric: configureAuth took 347.63591ms
	I0531 19:10:10.327862  133708 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:10:10.328006  133708 config.go:182] Loaded profile config "test-preload-575369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0531 19:10:10.328095  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:10:10.343898  133708 main.go:141] libmachine: Using SSH client type: native
	I0531 19:10:10.344330  133708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0531 19:10:10.344355  133708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:10:10.617122  133708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:10:10.617146  133708 machine.go:91] provisioned docker machine in 3.909523275s
	I0531 19:10:10.617158  133708 start.go:300] post-start starting for "test-preload-575369" (driver="docker")
	I0531 19:10:10.617167  133708 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:10:10.617231  133708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:10:10.617280  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:10:10.633465  133708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/test-preload-575369/id_rsa Username:docker}
	I0531 19:10:10.716498  133708 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:10:10.719495  133708 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:10:10.719522  133708 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:10:10.719530  133708 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:10:10.719536  133708 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0531 19:10:10.719544  133708 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/addons for local assets ...
	I0531 19:10:10.719627  133708 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/files for local assets ...
	I0531 19:10:10.719699  133708 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> 142322.pem in /etc/ssl/certs
	I0531 19:10:10.719777  133708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:10:10.726951  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /etc/ssl/certs/142322.pem (1708 bytes)
	I0531 19:10:10.747272  133708 start.go:303] post-start completed in 130.10036ms
	I0531 19:10:10.747359  133708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:10:10.747397  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:10:10.763220  133708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/test-preload-575369/id_rsa Username:docker}
	I0531 19:10:10.844843  133708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:10:10.848720  133708 fix.go:57] fixHost completed within 4.492822849s
	I0531 19:10:10.848760  133708 start.go:83] releasing machines lock for "test-preload-575369", held for 4.492881602s
	I0531 19:10:10.848826  133708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-575369
	I0531 19:10:10.864916  133708 ssh_runner.go:195] Run: cat /version.json
	I0531 19:10:10.864964  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:10:10.864992  133708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:10:10.865051  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:10:10.881185  133708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/test-preload-575369/id_rsa Username:docker}
	I0531 19:10:10.881541  133708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/test-preload-575369/id_rsa Username:docker}
	I0531 19:10:11.047541  133708 ssh_runner.go:195] Run: systemctl --version
	I0531 19:10:11.051532  133708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:10:11.186874  133708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:10:11.190923  133708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:10:11.198608  133708 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:10:11.198670  133708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:10:11.206584  133708 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 19:10:11.206605  133708 start.go:481] detecting cgroup driver to use...
	I0531 19:10:11.206657  133708 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:10:11.206702  133708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:10:11.217366  133708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:10:11.227180  133708 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:10:11.227230  133708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:10:11.238151  133708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:10:11.247809  133708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:10:11.319834  133708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:10:11.387912  133708 docker.go:209] disabling docker service ...
	I0531 19:10:11.387977  133708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:10:11.398342  133708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:10:11.407468  133708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:10:11.476208  133708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:10:11.543371  133708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:10:11.553216  133708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:10:11.566870  133708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0531 19:10:11.566916  133708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:10:11.575107  133708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:10:11.575152  133708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:10:11.583453  133708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:10:11.591628  133708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:10:11.599604  133708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:10:11.607136  133708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:10:11.613911  133708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:10:11.620800  133708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:10:11.695337  133708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:10:11.808613  133708 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:10:11.808673  133708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:10:11.811912  133708 start.go:549] Will wait 60s for crictl version
	I0531 19:10:11.811969  133708 ssh_runner.go:195] Run: which crictl
	I0531 19:10:11.814865  133708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:10:11.846855  133708 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0531 19:10:11.846915  133708 ssh_runner.go:195] Run: crio --version
	I0531 19:10:11.878972  133708 ssh_runner.go:195] Run: crio --version
	I0531 19:10:11.914731  133708 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.5 ...
	I0531 19:10:11.916807  133708 cli_runner.go:164] Run: docker network inspect test-preload-575369 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0531 19:10:11.932857  133708 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0531 19:10:11.936161  133708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:10:11.946050  133708 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0531 19:10:11.946117  133708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:10:11.981344  133708 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:10:11.981362  133708 crio.go:415] Images already preloaded, skipping extraction
	I0531 19:10:11.981401  133708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:10:12.012207  133708 crio.go:496] all images are preloaded for cri-o runtime.
	I0531 19:10:12.012231  133708 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:10:12.012313  133708 ssh_runner.go:195] Run: crio config
	I0531 19:10:12.052268  133708 cni.go:84] Creating CNI manager for ""
	I0531 19:10:12.052309  133708 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:10:12.052322  133708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0531 19:10:12.052348  133708 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-575369 NodeName:test-preload-575369 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:10:12.052515  133708 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-575369"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:10:12.052602  133708 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=test-preload-575369 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-575369 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0531 19:10:12.052667  133708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0531 19:10:12.060507  133708 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:10:12.060565  133708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:10:12.067884  133708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0531 19:10:12.082958  133708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:10:12.097780  133708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0531 19:10:12.112102  133708 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0531 19:10:12.114972  133708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:10:12.124110  133708 certs.go:56] Setting up /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369 for IP: 192.168.67.2
	I0531 19:10:12.124135  133708 certs.go:190] acquiring lock for shared ca certs: {Name:mkbc42e9eaddef0752bd9f3cb948d1ed478bdf0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:10:12.124259  133708 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key
	I0531 19:10:12.124337  133708 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key
	I0531 19:10:12.124408  133708 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/client.key
	I0531 19:10:12.124458  133708 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/apiserver.key.c7fa3a9e
	I0531 19:10:12.124497  133708 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/proxy-client.key
	I0531 19:10:12.124608  133708 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem (1338 bytes)
	W0531 19:10:12.124639  133708 certs.go:433] ignoring /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232_empty.pem, impossibly tiny 0 bytes
	I0531 19:10:12.124648  133708 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem (1679 bytes)
	I0531 19:10:12.124669  133708 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem (1078 bytes)
	I0531 19:10:12.124698  133708 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:10:12.124720  133708 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem (1675 bytes)
	I0531 19:10:12.124757  133708 certs.go:437] found cert: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem (1708 bytes)
	I0531 19:10:12.125375  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0531 19:10:12.145508  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 19:10:12.165072  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:10:12.185038  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 19:10:12.204450  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:10:12.223843  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:10:12.244236  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:10:12.264505  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:10:12.284153  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:10:12.304064  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/14232.pem --> /usr/share/ca-certificates/14232.pem (1338 bytes)
	I0531 19:10:12.323903  133708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /usr/share/ca-certificates/142322.pem (1708 bytes)
	I0531 19:10:12.343904  133708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:10:12.358635  133708 ssh_runner.go:195] Run: openssl version
	I0531 19:10:12.363233  133708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:10:12.371094  133708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:10:12.374054  133708 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 31 18:44 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:10:12.374107  133708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:10:12.380543  133708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:10:12.387876  133708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14232.pem && ln -fs /usr/share/ca-certificates/14232.pem /etc/ssl/certs/14232.pem"
	I0531 19:10:12.395667  133708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14232.pem
	I0531 19:10:12.398476  133708 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 31 18:49 /usr/share/ca-certificates/14232.pem
	I0531 19:10:12.398512  133708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14232.pem
	I0531 19:10:12.404423  133708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14232.pem /etc/ssl/certs/51391683.0"
	I0531 19:10:12.411753  133708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142322.pem && ln -fs /usr/share/ca-certificates/142322.pem /etc/ssl/certs/142322.pem"
	I0531 19:10:12.419770  133708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142322.pem
	I0531 19:10:12.422917  133708 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 31 18:49 /usr/share/ca-certificates/142322.pem
	I0531 19:10:12.422964  133708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142322.pem
	I0531 19:10:12.429062  133708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142322.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:10:12.436787  133708 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0531 19:10:12.439847  133708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 19:10:12.445962  133708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 19:10:12.451986  133708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 19:10:12.457774  133708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 19:10:12.463447  133708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 19:10:12.469058  133708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0531 19:10:12.474832  133708 kubeadm.go:404] StartCluster: {Name:test-preload-575369 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-575369 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:10:12.474924  133708 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:10:12.474956  133708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:10:12.506139  133708 cri.go:88] found id: ""
	I0531 19:10:12.506214  133708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:10:12.514047  133708 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0531 19:10:12.514070  133708 kubeadm.go:636] restartCluster start
	I0531 19:10:12.514119  133708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0531 19:10:12.521576  133708 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:10:12.521974  133708 kubeconfig.go:135] verify returned: extract IP: "test-preload-575369" does not appear in /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:10:12.522084  133708 kubeconfig.go:146] "test-preload-575369" context is missing from /home/jenkins/minikube-integration/16569-7270/kubeconfig - will repair!
	I0531 19:10:12.522386  133708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/kubeconfig: {Name:mk2e9ef864ed1e4aaf9a6e1bd97970840e57fe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:10:12.522990  133708 kapi.go:59] client config for test-preload-575369: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:10:12.523848  133708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0531 19:10:12.531187  133708 api_server.go:166] Checking apiserver status ...
	I0531 19:10:12.531234  133708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:10:12.540131  133708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:10:13.040848  133708 api_server.go:166] Checking apiserver status ...
	I0531 19:10:13.040951  133708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:10:13.050508  133708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:10:13.541221  133708 api_server.go:166] Checking apiserver status ...
	I0531 19:10:13.541311  133708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:10:13.550670  133708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... the same "Checking apiserver status" / pgrep attempt repeated roughly every 500ms; 16 further identical failures from 19:10:14.040 through 19:10:21.550 elided ...]
	I0531 19:10:22.040731  133708 api_server.go:166] Checking apiserver status ...
	I0531 19:10:22.040818  133708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0531 19:10:22.050435  133708 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
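
The loop above is minikube probing for a running kube-apiserver before deciding the cluster needs a reconfigure. The same probe can be reproduced by hand; a minimal sketch (profile name taken from this log, and assuming `minikube ssh` propagates the remote exit status, which recent versions do):

	# Retry the exact pgrep check from api_server.go every 500ms until it succeeds.
	until minikube -p test-preload-575369 ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"; do
	  sleep 0.5
	done
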
	I0531 19:10:22.532313  133708 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0531 19:10:22.532365  133708 kubeadm.go:1123] stopping kube-system containers ...
	I0531 19:10:22.532381  133708 cri.go:53] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0531 19:10:22.532473  133708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:10:22.565426  133708 cri.go:88] found id: ""
	I0531 19:10:22.565478  133708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0531 19:10:22.575377  133708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:10:22.583011  133708 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 31 19:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 31 19:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 May 31 19:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 May 31 19:09 /etc/kubernetes/scheduler.conf
	
	I0531 19:10:22.583092  133708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 19:10:22.590494  133708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 19:10:22.597740  133708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 19:10:22.604828  133708 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:10:22.604872  133708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 19:10:22.611946  133708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 19:10:22.619801  133708 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:10:22.619862  133708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 19:10:22.627488  133708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:10:22.635319  133708 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0531 19:10:22.635341  133708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:10:22.681360  133708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:10:23.398113  133708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:10:23.578957  133708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:10:23.644077  133708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
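
For reference, the reconfigure above is not a full `kubeadm init` but a sequence of individual init phases against the generated config. The equivalent commands, copied verbatim from the log, would be run on the node like this:

	# Each phase regenerates one piece of the control plane from kubeadm.yaml.
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
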
	I0531 19:10:23.753780  133708 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:10:23.753837  133708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:10:24.264107  133708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:10:24.764442  133708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:10:24.774683  133708 api_server.go:72] duration metric: took 1.020894614s to wait for apiserver process to appear ...
	I0531 19:10:24.774708  133708 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:10:24.774726  133708 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:10:28.147403  133708 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0531 19:10:28.147503  133708 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
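
The 403 here is expected rather than a failure: /healthz is being served, but the unauthenticated probe maps to the `system:anonymous` user, which has no RBAC permission for the path until the bootstrap roles finish (note the `[-]poststarthook/rbac/bootstrap-roles` entry in the 500 responses below). An authenticated probe with the profile's admin client certificate (paths as they appear in the kapi client config later in this log) would look like:

	# Probe /healthz with the minikube profile's admin client certificate.
	curl --cacert /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/client.crt \
	     --key /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/client.key \
	     https://192.168.67.2:8443/healthz
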
	I0531 19:10:28.648284  133708 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:10:28.653213  133708 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0531 19:10:28.653243  133708 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500: [same check list as the 500 response above]
	I0531 19:10:29.148371  133708 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:10:29.153859  133708 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0531 19:10:29.153897  133708 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500: [same check list as the 500 response above]
	I0531 19:10:29.648448  133708 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:10:29.654294  133708 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0531 19:10:29.664401  133708 api_server.go:141] control plane version: v1.24.4
	I0531 19:10:29.664440  133708 api_server.go:131] duration metric: took 4.889716965s to wait for apiserver health ...
	I0531 19:10:29.664452  133708 cni.go:84] Creating CNI manager for ""
	I0531 19:10:29.664460  133708 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 19:10:29.667572  133708 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 19:10:29.669534  133708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 19:10:29.673467  133708 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.24.4/kubectl ...
	I0531 19:10:29.673515  133708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 19:10:29.692413  133708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
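
Once the kindnet manifest has been applied as above, the rollout can be checked by hand; a sketch (the `app=kindnet` label is assumed from minikube's standard kindnet DaemonSet and is not shown in this log):

	# Confirm the kindnet CNI pod is back up after the manifest apply.
	kubectl --context test-preload-575369 -n kube-system get pods -l app=kindnet -o wide
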
	I0531 19:10:30.675804  133708 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:10:30.682601  133708 system_pods.go:59] 8 kube-system pods found
	I0531 19:10:30.682635  133708 system_pods.go:61] "coredns-6d4b75cb6d-bwhc5" [42521aa1-c950-4042-a0bf-cc5307f70549] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0531 19:10:30.682643  133708 system_pods.go:61] "etcd-test-preload-575369" [c99d1d5e-b16b-4036-8b68-40ef1d4f7c30] Running
	I0531 19:10:30.682647  133708 system_pods.go:61] "kindnet-788mf" [f485b948-a1c4-4d16-97b0-2c878b1d0c19] Running
	I0531 19:10:30.682652  133708 system_pods.go:61] "kube-apiserver-test-preload-575369" [0b223529-4577-4868-89fb-4119ff784789] Running
	I0531 19:10:30.682658  133708 system_pods.go:61] "kube-controller-manager-test-preload-575369" [e538d49c-b4b5-48c9-8014-a52d3540a9a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0531 19:10:30.682669  133708 system_pods.go:61] "kube-proxy-bzwwc" [e56374b3-4de5-4172-9350-2eb0ebedb824] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0531 19:10:30.682675  133708 system_pods.go:61] "kube-scheduler-test-preload-575369" [19c0b339-4af7-41ac-a453-ecf5bab38b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0531 19:10:30.682685  133708 system_pods.go:61] "storage-provisioner" [409b11ce-c83a-495e-9760-5e263f8a9102] Running
	I0531 19:10:30.682693  133708 system_pods.go:74] duration metric: took 6.869395ms to wait for pod list to return data ...
	I0531 19:10:30.682701  133708 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:10:30.684998  133708 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0531 19:10:30.685020  133708 node_conditions.go:123] node cpu capacity is 8
	I0531 19:10:30.685033  133708 node_conditions.go:105] duration metric: took 2.324232ms to run NodePressure ...
	I0531 19:10:30.685047  133708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0531 19:10:30.812878  133708 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0531 19:10:30.816540  133708 kubeadm.go:787] kubelet initialised
	I0531 19:10:30.816561  133708 kubeadm.go:788] duration metric: took 3.663488ms waiting for restarted kubelet to initialise ...
	I0531 19:10:30.816568  133708 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:10:30.821601  133708 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-bwhc5" in "kube-system" namespace to be "Ready" ...
	I0531 19:10:32.832091  133708 pod_ready.go:102] pod "coredns-6d4b75cb6d-bwhc5" in "kube-system" namespace has status "Ready":"False"
	[... the same "Ready":"False" poll repeated every 2-3 seconds; 15 further identical lines from 19:10:34.832 through 19:11:07.831 elided ...]
	I0531 19:11:08.332150  133708 pod_ready.go:92] pod "coredns-6d4b75cb6d-bwhc5" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:08.332176  133708 pod_ready.go:81] duration metric: took 37.510549737s waiting for pod "coredns-6d4b75cb6d-bwhc5" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.332186  133708 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.336220  133708 pod_ready.go:92] pod "etcd-test-preload-575369" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:08.336243  133708 pod_ready.go:81] duration metric: took 4.049641ms waiting for pod "etcd-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.336254  133708 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.340534  133708 pod_ready.go:92] pod "kube-apiserver-test-preload-575369" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:08.340553  133708 pod_ready.go:81] duration metric: took 4.294352ms waiting for pod "kube-apiserver-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.340563  133708 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.344571  133708 pod_ready.go:92] pod "kube-controller-manager-test-preload-575369" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:08.344592  133708 pod_ready.go:81] duration metric: took 4.02305ms waiting for pod "kube-controller-manager-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.344600  133708 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bzwwc" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.348184  133708 pod_ready.go:92] pod "kube-proxy-bzwwc" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:08.348200  133708 pod_ready.go:81] duration metric: took 3.59313ms waiting for pod "kube-proxy-bzwwc" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.348207  133708 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.730834  133708 pod_ready.go:92] pod "kube-scheduler-test-preload-575369" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:08.730859  133708 pod_ready.go:81] duration metric: took 382.646175ms waiting for pod "kube-scheduler-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:08.730869  133708 pod_ready.go:38] duration metric: took 37.914293419s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
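
The pod_ready polling above is functionally what `kubectl wait` does; the same check for the CoreDNS pod could be expressed as (label taken from the list logged above):

	# Block until the kube-dns-labelled pod reports the Ready condition, or time out.
	kubectl --context test-preload-575369 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
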
	I0531 19:11:08.730886  133708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 19:11:08.738379  133708 ops.go:34] apiserver oom_adj: -16
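
The oom_adj check confirms the kubelet started the apiserver with a strongly negative OOM score, so the kernel will kill it last under memory pressure; inside the node it is just the command logged above:

	# A negative value (-16 here) deprioritises the apiserver as an OOM-kill target.
	sudo cat /proc/$(pgrep kube-apiserver)/oom_adj
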
	I0531 19:11:08.738396  133708 kubeadm.go:640] restartCluster took 56.224320416s
	I0531 19:11:08.738404  133708 kubeadm.go:406] StartCluster complete in 56.26357769s
	I0531 19:11:08.738424  133708 settings.go:142] acquiring lock: {Name:mk168872ecacf1e04453fffdd7073a8caed6462b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:11:08.738491  133708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:11:08.739122  133708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16569-7270/kubeconfig: {Name:mk2e9ef864ed1e4aaf9a6e1bd97970840e57fe82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:11:08.739406  133708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 19:11:08.739529  133708 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0531 19:11:08.739619  133708 config.go:182] Loaded profile config "test-preload-575369": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0531 19:11:08.739661  133708 addons.go:66] Setting storage-provisioner=true in profile "test-preload-575369"
	I0531 19:11:08.739687  133708 addons.go:66] Setting default-storageclass=true in profile "test-preload-575369"
	I0531 19:11:08.739698  133708 addons.go:228] Setting addon storage-provisioner=true in "test-preload-575369"
	W0531 19:11:08.739717  133708 addons.go:237] addon storage-provisioner should already be in state true
	I0531 19:11:08.739720  133708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-575369"
	I0531 19:11:08.739807  133708 host.go:66] Checking if "test-preload-575369" exists ...
	I0531 19:11:08.739992  133708 kapi.go:59] client config for test-preload-575369: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:11:08.740065  133708 cli_runner.go:164] Run: docker container inspect test-preload-575369 --format={{.State.Status}}
	I0531 19:11:08.740256  133708 cli_runner.go:164] Run: docker container inspect test-preload-575369 --format={{.State.Status}}
	I0531 19:11:08.743310  133708 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-575369" context rescaled to 1 replicas
	I0531 19:11:08.743352  133708 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:11:08.745592  133708 out.go:177] * Verifying Kubernetes components...
	I0531 19:11:08.747236  133708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:11:08.758475  133708 kapi.go:59] client config for test-preload-575369: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/client.crt", KeyFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/profiles/test-preload-575369/client.key", CAFile:"/home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b95a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 19:11:08.763151  133708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:11:08.764854  133708 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:11:08.764873  133708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 19:11:08.764924  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:11:08.766094  133708 addons.go:228] Setting addon default-storageclass=true in "test-preload-575369"
	W0531 19:11:08.766114  133708 addons.go:237] addon default-storageclass should already be in state true
	I0531 19:11:08.766165  133708 host.go:66] Checking if "test-preload-575369" exists ...
	I0531 19:11:08.766664  133708 cli_runner.go:164] Run: docker container inspect test-preload-575369 --format={{.State.Status}}
	I0531 19:11:08.784958  133708 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 19:11:08.784980  133708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 19:11:08.785032  133708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-575369
	I0531 19:11:08.785763  133708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/test-preload-575369/id_rsa Username:docker}
	I0531 19:11:08.805197  133708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/test-preload-575369/id_rsa Username:docker}
	I0531 19:11:08.813249  133708 node_ready.go:35] waiting up to 6m0s for node "test-preload-575369" to be "Ready" ...
	I0531 19:11:08.813299  133708 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0531 19:11:08.885573  133708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 19:11:08.901455  133708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 19:11:08.930650  133708 node_ready.go:49] node "test-preload-575369" has status "Ready":"True"
	I0531 19:11:08.930672  133708 node_ready.go:38] duration metric: took 117.395002ms waiting for node "test-preload-575369" to be "Ready" ...
	I0531 19:11:08.930680  133708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:11:09.094923  133708 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 19:11:09.096710  133708 addons.go:499] enable addons completed in 357.184383ms: enabled=[storage-provisioner default-storageclass]
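
Addon state can be confirmed from the host at this point with:

	# Lists every addon for the profile with its enabled/disabled status.
	minikube -p test-preload-575369 addons list
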
	I0531 19:11:09.133121  133708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-bwhc5" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:09.530406  133708 pod_ready.go:92] pod "coredns-6d4b75cb6d-bwhc5" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:09.530427  133708 pod_ready.go:81] duration metric: took 397.28191ms waiting for pod "coredns-6d4b75cb6d-bwhc5" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:09.530437  133708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:09.931216  133708 pod_ready.go:92] pod "etcd-test-preload-575369" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:09.931237  133708 pod_ready.go:81] duration metric: took 400.794905ms waiting for pod "etcd-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:09.931254  133708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:10.330480  133708 pod_ready.go:92] pod "kube-apiserver-test-preload-575369" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:10.330502  133708 pod_ready.go:81] duration metric: took 399.242676ms waiting for pod "kube-apiserver-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:10.330511  133708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:10.730385  133708 pod_ready.go:92] pod "kube-controller-manager-test-preload-575369" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:10.730416  133708 pod_ready.go:81] duration metric: took 399.896734ms waiting for pod "kube-controller-manager-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:10.730430  133708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzwwc" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:11.130082  133708 pod_ready.go:92] pod "kube-proxy-bzwwc" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:11.130106  133708 pod_ready.go:81] duration metric: took 399.668332ms waiting for pod "kube-proxy-bzwwc" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:11.130118  133708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:11.530750  133708 pod_ready.go:92] pod "kube-scheduler-test-preload-575369" in "kube-system" namespace has status "Ready":"True"
	I0531 19:11:11.530776  133708 pod_ready.go:81] duration metric: took 400.650181ms waiting for pod "kube-scheduler-test-preload-575369" in "kube-system" namespace to be "Ready" ...
	I0531 19:11:11.530790  133708 pod_ready.go:38] duration metric: took 2.600101808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 19:11:11.530807  133708 api_server.go:52] waiting for apiserver process to appear ...
	I0531 19:11:11.530868  133708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:11:11.540827  133708 api_server.go:72] duration metric: took 2.797444517s to wait for apiserver process to appear ...
	I0531 19:11:11.540850  133708 api_server.go:88] waiting for apiserver healthz status ...
	I0531 19:11:11.540865  133708 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0531 19:11:11.545822  133708 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0531 19:11:11.546593  133708 api_server.go:141] control plane version: v1.24.4
	I0531 19:11:11.546612  133708 api_server.go:131] duration metric: took 5.756987ms to wait for apiserver health ...
	I0531 19:11:11.546619  133708 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 19:11:11.733765  133708 system_pods.go:59] 8 kube-system pods found
	I0531 19:11:11.733795  133708 system_pods.go:61] "coredns-6d4b75cb6d-bwhc5" [42521aa1-c950-4042-a0bf-cc5307f70549] Running
	I0531 19:11:11.733800  133708 system_pods.go:61] "etcd-test-preload-575369" [c99d1d5e-b16b-4036-8b68-40ef1d4f7c30] Running
	I0531 19:11:11.733804  133708 system_pods.go:61] "kindnet-788mf" [f485b948-a1c4-4d16-97b0-2c878b1d0c19] Running
	I0531 19:11:11.733808  133708 system_pods.go:61] "kube-apiserver-test-preload-575369" [0b223529-4577-4868-89fb-4119ff784789] Running
	I0531 19:11:11.733812  133708 system_pods.go:61] "kube-controller-manager-test-preload-575369" [e538d49c-b4b5-48c9-8014-a52d3540a9a1] Running
	I0531 19:11:11.733816  133708 system_pods.go:61] "kube-proxy-bzwwc" [e56374b3-4de5-4172-9350-2eb0ebedb824] Running
	I0531 19:11:11.733820  133708 system_pods.go:61] "kube-scheduler-test-preload-575369" [19c0b339-4af7-41ac-a453-ecf5bab38b70] Running
	I0531 19:11:11.733824  133708 system_pods.go:61] "storage-provisioner" [409b11ce-c83a-495e-9760-5e263f8a9102] Running
	I0531 19:11:11.733828  133708 system_pods.go:74] duration metric: took 187.204781ms to wait for pod list to return data ...
	I0531 19:11:11.733836  133708 default_sa.go:34] waiting for default service account to be created ...
	I0531 19:11:11.929784  133708 default_sa.go:45] found service account: "default"
	I0531 19:11:11.929807  133708 default_sa.go:55] duration metric: took 195.966597ms for default service account to be created ...
	I0531 19:11:11.929816  133708 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 19:11:12.132267  133708 system_pods.go:86] 8 kube-system pods found
	I0531 19:11:12.132321  133708 system_pods.go:89] "coredns-6d4b75cb6d-bwhc5" [42521aa1-c950-4042-a0bf-cc5307f70549] Running
	I0531 19:11:12.132330  133708 system_pods.go:89] "etcd-test-preload-575369" [c99d1d5e-b16b-4036-8b68-40ef1d4f7c30] Running
	I0531 19:11:12.132336  133708 system_pods.go:89] "kindnet-788mf" [f485b948-a1c4-4d16-97b0-2c878b1d0c19] Running
	I0531 19:11:12.132342  133708 system_pods.go:89] "kube-apiserver-test-preload-575369" [0b223529-4577-4868-89fb-4119ff784789] Running
	I0531 19:11:12.132353  133708 system_pods.go:89] "kube-controller-manager-test-preload-575369" [e538d49c-b4b5-48c9-8014-a52d3540a9a1] Running
	I0531 19:11:12.132359  133708 system_pods.go:89] "kube-proxy-bzwwc" [e56374b3-4de5-4172-9350-2eb0ebedb824] Running
	I0531 19:11:12.132365  133708 system_pods.go:89] "kube-scheduler-test-preload-575369" [19c0b339-4af7-41ac-a453-ecf5bab38b70] Running
	I0531 19:11:12.132371  133708 system_pods.go:89] "storage-provisioner" [409b11ce-c83a-495e-9760-5e263f8a9102] Running
	I0531 19:11:12.132380  133708 system_pods.go:126] duration metric: took 202.55839ms to wait for k8s-apps to be running ...
	I0531 19:11:12.132388  133708 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 19:11:12.132442  133708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:11:12.143024  133708 system_svc.go:56] duration metric: took 10.626108ms WaitForService to wait for kubelet.
	I0531 19:11:12.143048  133708 kubeadm.go:581] duration metric: took 3.399668448s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0531 19:11:12.143063  133708 node_conditions.go:102] verifying NodePressure condition ...
	I0531 19:11:12.330252  133708 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0531 19:11:12.330275  133708 node_conditions.go:123] node cpu capacity is 8
	I0531 19:11:12.330286  133708 node_conditions.go:105] duration metric: took 187.21823ms to run NodePressure ...
	I0531 19:11:12.330296  133708 start.go:228] waiting for startup goroutines ...
	I0531 19:11:12.330301  133708 start.go:233] waiting for cluster config update ...
	I0531 19:11:12.330311  133708 start.go:242] writing updated cluster config ...
	I0531 19:11:12.330575  133708 ssh_runner.go:195] Run: rm -f paused
	I0531 19:11:12.373516  133708 start.go:573] kubectl: 1.27.2, cluster: 1.24.4 (minor skew: 3)
	I0531 19:11:12.376145  133708 out.go:177] 
	W0531 19:11:12.377965  133708 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0531 19:11:12.380113  133708 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0531 19:11:12.382148  133708 out.go:177] * Done! kubectl is now configured to use "test-preload-575369" cluster and "default" namespace by default
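
The skew warning above is advisory: kubectl officially supports only one minor version of skew against the server, and 1.27 against 1.24 exceeds that. Using the version-matched kubectl that minikube bundles, as the hint suggests, avoids it:

	# minikube fetches and runs a kubectl matching the cluster's v1.24.4.
	minikube -p test-preload-575369 kubectl -- get pods -A
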
	
	* 
	* ==> CRI-O <==
	* May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.227804849Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c4ec9e50-7d72-4b58-9063-c93348bd5705 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.228449215Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=920aeebc-5c4c-4d18-ab6c-bf2f99f87f40 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.228600880Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=920aeebc-5c4c-4d18-ab6c-bf2f99f87f40 name=/runtime.v1.ImageService/ImageStatus
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.229219092Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=afce89a5-f25f-405f-8379-fe38d53259fd name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.229309252Z" level=warning msg="Allowed annotations are specified for workload []"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.240510893Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/07b53bbdd5088e4aad4457772fd7965c080f741c369ef2be43ab3b6b4bd9afcd/merged/etc/passwd: no such file or directory"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.240548278Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/07b53bbdd5088e4aad4457772fd7965c080f741c369ef2be43ab3b6b4bd9afcd/merged/etc/group: no such file or directory"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.374384203Z" level=info msg="Created container 61e4cb2d8fd619d1565797633eca687e6902c3733dfc989ae36bc530967fa48e: kube-system/storage-provisioner/storage-provisioner" id=afce89a5-f25f-405f-8379-fe38d53259fd name=/runtime.v1.RuntimeService/CreateContainer
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.374921557Z" level=info msg="Starting container: 61e4cb2d8fd619d1565797633eca687e6902c3733dfc989ae36bc530967fa48e" id=137c19ea-e49d-40cb-b11c-dfa041464222 name=/runtime.v1.RuntimeService/StartContainer
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.382285085Z" level=info msg="Started container" PID=1635 containerID=61e4cb2d8fd619d1565797633eca687e6902c3733dfc989ae36bc530967fa48e description=kube-system/storage-provisioner/storage-provisioner id=137c19ea-e49d-40cb-b11c-dfa041464222 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f59163bd13f02ec021df921e0a508faba6a111ecca73b8b5800ca9bc24b7fbcb
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.454413852Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.458363500Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.458400985Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.458422050Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.462086683Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.462117043Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.462129453Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.465480411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.465511570Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.544099364Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.548106942Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.548137096Z" level=info msg="Updated default CNI network name to kindnet"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.548154518Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.551807947Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	May 31 19:10:30 test-preload-575369 crio[633]: time="2023-05-31 19:10:30.551843904Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	61e4cb2d8fd61       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   43 seconds ago      Running             storage-provisioner       1                   f59163bd13f02       storage-provisioner
	29c3bf859c4fa       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   43 seconds ago      Running             kindnet-cni               1                   815d38fd4cf9e       kindnet-788mf
	d17a401f170ea       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   43 seconds ago      Running             coredns                   1                   94c6669b29aeb       coredns-6d4b75cb6d-bwhc5
	863b15a8df41c       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   43 seconds ago      Running             kube-proxy                1                   c0e15a6aebd2a       kube-proxy-bzwwc
	86ecab35e01fc       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   49 seconds ago      Running             etcd                      1                   5800e5b902cc1       etcd-test-preload-575369
	201dc85e412dc       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   49 seconds ago      Running             kube-scheduler            1                   7ea5845dc0925       kube-scheduler-test-preload-575369
	337c4b043f7f7       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   49 seconds ago      Running             kube-controller-manager   1                   0e2bcbc819e16       kube-controller-manager-test-preload-575369
	f63606bf9b9aa       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   49 seconds ago      Running             kube-apiserver            1                   72ac239a8d0d6       kube-apiserver-test-preload-575369
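
The container table above comes from the CRI; the same view can be produced inside the node with crictl, which talks to CRI-O directly:

	# List all containers (running and exited) known to CRI-O.
	minikube -p test-preload-575369 ssh -- sudo crictl ps -a
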
	
	* 
	* ==> coredns [d17a401f170eae9a48d7ab848d8d6467f79e52957d5098e72507aae51ca7bb46] <==
	* [WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c452237b08d4ce46c54c803341046308
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:52703 - 27248 "HINFO IN 8829992863648679407.5605046899952184922. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056099651s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-575369
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-575369
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7022875d4a054c2d518e5e5a7b9d500799d50140
	                    minikube.k8s.io/name=test-preload-575369
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_31T19_09_30_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 May 2023 19:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-575369
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 May 2023 19:11:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 May 2023 19:10:28 +0000   Wed, 31 May 2023 19:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 May 2023 19:10:28 +0000   Wed, 31 May 2023 19:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 May 2023 19:10:28 +0000   Wed, 31 May 2023 19:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 May 2023 19:10:28 +0000   Wed, 31 May 2023 19:09:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    test-preload-575369
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	System Info:
	  Machine ID:                 99af3e3cb795451cbd7e9caa7edd8e6c
	  System UUID:                4044f6d9-cca0-44ae-b1fb-048ea1f2fd08
	  Boot ID:                    858e553b-6392-44c5-a611-8f56a2b0fab6
	  Kernel Version:             5.15.0-1035-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-bwhc5                       100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     91s
	  kube-system                 etcd-test-preload-575369                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         104s
	  kube-system                 kindnet-788mf                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-test-preload-575369             250m (3%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-test-preload-575369    200m (2%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-bzwwc                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-575369             100m (1%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 89s                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 104s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s               kubelet          Node test-preload-575369 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s               kubelet          Node test-preload-575369 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s               kubelet          Node test-preload-575369 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           92s                node-controller  Node test-preload-575369 event: Registered Node test-preload-575369 in Controller
	  Normal  NodeReady                83s                kubelet          Node test-preload-575369 status is now: NodeReady
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node test-preload-575369 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node test-preload-575369 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x8 over 50s)  kubelet          Node test-preload-575369 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node test-preload-575369 event: Registered Node test-preload-575369 in Controller
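
The NodePressure verification earlier in the log reads the same capacity fields shown in this describe output; pulled directly, they look like (a jsonpath sketch):

	# Print the node's allocatable cpu and memory, the values the checks above report.
	kubectl --context test-preload-575369 get node test-preload-575369 \
	  -o jsonpath='{.status.allocatable.cpu} {.status.allocatable.memory}{"\n"}'
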
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: 02 42 20 81 09 81 02 42 c0 a8 3a 02 08 00
	[  +4.035715] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-e74e442a15eb
	[  +0.000006] ll header: 00000000: 02 42 20 81 09 81 02 42 c0 a8 3a 02 08 00
	[  +8.191347] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-e74e442a15eb
	[  +0.000006] ll header: 00000000: 02 42 20 81 09 81 02 42 c0 a8 3a 02 08 00
	[May31 19:07] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e74e442a15eb
	[  +0.000008] ll header: 00000000: 02 42 20 81 09 81 02 42 c0 a8 3a 02 08 00
	[  +1.029100] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e74e442a15eb
	[  +0.000023] ll header: 00000000: 02 42 20 81 09 81 02 42 c0 a8 3a 02 08 00
	[  +2.015801] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e74e442a15eb
	[  +0.000006] ll header: 00000000: 02 42 20 81 09 81 02 42 c0 a8 3a 02 08 00
	[  +4.127719] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e74e442a15eb
	[  +0.000026] ll header: 00000000: 02 42 20 81 09 81 02 42 c0 a8 3a 02 08 00
	[  +8.191377] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-e74e442a15eb
	[  +0.000005] ll header: 00000000: 02 42 20 81 09 81 02 42 c0 a8 3a 02 08 00
	[May31 19:10] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a9af2e5dd076
	[  +0.000006] ll header: 00000000: 02 42 62 41 62 0f 02 42 c0 a8 43 02 08 00
	[  +1.005533] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a9af2e5dd076
	[  +0.000007] ll header: 00000000: 02 42 62 41 62 0f 02 42 c0 a8 43 02 08 00
	[  +2.011854] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a9af2e5dd076
	[  +0.000028] ll header: 00000000: 02 42 62 41 62 0f 02 42 c0 a8 43 02 08 00
	[  +4.127654] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a9af2e5dd076
	[  +0.000023] ll header: 00000000: 02 42 62 41 62 0f 02 42 c0 a8 43 02 08 00
	[  +8.191414] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a9af2e5dd076
	[  +0.000024] ll header: 00000000: 02 42 62 41 62 0f 02 42 c0 a8 43 02 08 00
	
	* 
	* ==> etcd [86ecab35e01fcf3e81d150368f95cfdcc5b63535220f2ac78324a315e9d8ccb7] <==
	* {"level":"info","ts":"2023-05-31T19:10:24.563Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-05-31T19:10:24.563Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-05-31T19:10:24.564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-05-31T19:10:24.564Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-05-31T19:10:24.564Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:10:24.564Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-31T19:10:24.566Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-31T19:10:24.566Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-05-31T19:10:24.566Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-05-31T19:10:24.566Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-31T19:10:24.567Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-31T19:10:26.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-05-31T19:10:26.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-05-31T19:10:26.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-05-31T19:10:26.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-05-31T19:10:26.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-05-31T19:10:26.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-05-31T19:10:26.255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-05-31T19:10:26.256Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:test-preload-575369 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-31T19:10:26.256Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:10:26.256Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-31T19:10:26.257Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-31T19:10:26.257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-31T19:10:26.258Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-05-31T19:10:26.258Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  19:11:13 up 53 min,  0 users,  load average: 0.50, 0.77, 0.73
	Linux test-preload-575369 5.15.0-1035-gcp #43~20.04.1-Ubuntu SMP Mon May 22 16:49:11 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [29c3bf859c4fa42e16af73d0829134a5546d0f126b1dd79f74e31f808447121a] <==
	* I0531 19:10:30.145710       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 19:10:30.145757       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0531 19:10:30.145862       1 main.go:116] setting mtu 1500 for CNI 
	I0531 19:10:30.145873       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 19:10:30.145894       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 19:10:30.454160       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0531 19:10:30.454193       1 main.go:227] handling current node
	I0531 19:10:40.555747       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0531 19:10:40.555770       1 main.go:227] handling current node
	I0531 19:10:50.571260       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0531 19:10:50.571284       1 main.go:227] handling current node
	I0531 19:11:00.576146       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0531 19:11:00.576172       1 main.go:227] handling current node
	I0531 19:11:10.588668       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0531 19:11:10.588695       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [f63606bf9b9aab7ce80d0794bcddecf18e44a68c6768624c50e4b8358e6e7d29] <==
	* I0531 19:10:28.119734       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0531 19:10:28.119755       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0531 19:10:28.119777       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0531 19:10:28.141900       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0531 19:10:28.141999       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0531 19:10:28.142169       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 19:10:28.160218       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0531 19:10:28.161662       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0531 19:10:28.241927       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 19:10:28.242063       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0531 19:10:28.242141       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0531 19:10:28.241952       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0531 19:10:28.241974       1 cache.go:39] Caches are synced for autoregister controller
	I0531 19:10:28.241983       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 19:10:28.252220       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0531 19:10:28.849074       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0531 19:10:29.116999       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:10:29.866347       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0531 19:10:30.670725       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0531 19:10:30.756175       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0531 19:10:30.763515       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0531 19:10:30.800486       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:10:30.805136       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 19:10:41.173880       1 controller.go:611] quota admission added evaluator for: endpoints
	I0531 19:10:41.274780       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [337c4b043f7f74b5831915c858cfe522d9659e4631528f3913b42baf22e578dd] <==
	* W0531 19:10:41.076092       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-575369" does not exist
	I0531 19:10:41.082044       1 shared_informer.go:262] Caches are synced for node
	I0531 19:10:41.082071       1 range_allocator.go:173] Starting range CIDR allocator
	I0531 19:10:41.082075       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0531 19:10:41.082089       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0531 19:10:41.083153       1 shared_informer.go:262] Caches are synced for ephemeral
	I0531 19:10:41.116849       1 shared_informer.go:262] Caches are synced for TTL
	I0531 19:10:41.118024       1 shared_informer.go:262] Caches are synced for attach detach
	I0531 19:10:41.121410       1 shared_informer.go:262] Caches are synced for PVC protection
	I0531 19:10:41.124775       1 shared_informer.go:262] Caches are synced for GC
	I0531 19:10:41.141282       1 shared_informer.go:262] Caches are synced for taint
	I0531 19:10:41.141366       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0531 19:10:41.141377       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0531 19:10:41.141410       1 event.go:294] "Event occurred" object="test-preload-575369" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-575369 event: Registered Node test-preload-575369 in Controller"
	W0531 19:10:41.141476       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-575369. Assuming now as a timestamp.
	I0531 19:10:41.141579       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0531 19:10:41.150444       1 shared_informer.go:262] Caches are synced for persistent volume
	I0531 19:10:41.163643       1 shared_informer.go:262] Caches are synced for daemon sets
	I0531 19:10:41.171997       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0531 19:10:41.180374       1 shared_informer.go:262] Caches are synced for resource quota
	I0531 19:10:41.206679       1 shared_informer.go:262] Caches are synced for resource quota
	I0531 19:10:41.213843       1 shared_informer.go:262] Caches are synced for stateful set
	I0531 19:10:41.594371       1 shared_informer.go:262] Caches are synced for garbage collector
	I0531 19:10:41.647040       1 shared_informer.go:262] Caches are synced for garbage collector
	I0531 19:10:41.647065       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [863b15a8df41c012f3a57f5b55ab84df97e0e91e0b1d5dcd958df9aac81a3047] <==
	* I0531 19:10:29.847778       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0531 19:10:29.847827       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0531 19:10:29.847862       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0531 19:10:29.863704       1 server_others.go:206] "Using iptables Proxier"
	I0531 19:10:29.863732       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0531 19:10:29.863739       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0531 19:10:29.863751       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0531 19:10:29.863776       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0531 19:10:29.863899       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0531 19:10:29.864094       1 server.go:661] "Version info" version="v1.24.4"
	I0531 19:10:29.864111       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:10:29.864684       1 config.go:317] "Starting service config controller"
	I0531 19:10:29.864701       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0531 19:10:29.864960       1 config.go:444] "Starting node config controller"
	I0531 19:10:29.864975       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0531 19:10:29.865000       1 config.go:226] "Starting endpoint slice config controller"
	I0531 19:10:29.865012       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0531 19:10:29.964792       1 shared_informer.go:262] Caches are synced for service config
	I0531 19:10:29.965609       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0531 19:10:29.965623       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [201dc85e412dca76ddc499d597f857cdda32fd17bf8d8887490933f5cb4a712a] <==
	* I0531 19:10:25.358843       1 serving.go:348] Generated self-signed cert in-memory
	W0531 19:10:28.156117       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 19:10:28.156230       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 19:10:28.156268       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 19:10:28.156316       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 19:10:28.249126       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0531 19:10:28.249149       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:10:28.250495       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:10:28.250527       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 19:10:28.250622       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0531 19:10:28.250708       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 19:10:28.351016       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.250367    1015 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-575369"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.667067    1015 apiserver.go:52] "Watching apiserver"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.670258    1015 topology_manager.go:200] "Topology Admit Handler"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.670378    1015 topology_manager.go:200] "Topology Admit Handler"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.670478    1015 topology_manager.go:200] "Topology Admit Handler"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.670707    1015 topology_manager.go:200] "Topology Admit Handler"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.868699    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e56374b3-4de5-4172-9350-2eb0ebedb824-xtables-lock\") pod \"kube-proxy-bzwwc\" (UID: \"e56374b3-4de5-4172-9350-2eb0ebedb824\") " pod="kube-system/kube-proxy-bzwwc"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.868753    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e56374b3-4de5-4172-9350-2eb0ebedb824-lib-modules\") pod \"kube-proxy-bzwwc\" (UID: \"e56374b3-4de5-4172-9350-2eb0ebedb824\") " pod="kube-system/kube-proxy-bzwwc"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.868856    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bk2\" (UniqueName: \"kubernetes.io/projected/e56374b3-4de5-4172-9350-2eb0ebedb824-kube-api-access-j7bk2\") pod \"kube-proxy-bzwwc\" (UID: \"e56374b3-4de5-4172-9350-2eb0ebedb824\") " pod="kube-system/kube-proxy-bzwwc"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.868961    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/409b11ce-c83a-495e-9760-5e263f8a9102-tmp\") pod \"storage-provisioner\" (UID: \"409b11ce-c83a-495e-9760-5e263f8a9102\") " pod="kube-system/storage-provisioner"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.868990    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42521aa1-c950-4042-a0bf-cc5307f70549-config-volume\") pod \"coredns-6d4b75cb6d-bwhc5\" (UID: \"42521aa1-c950-4042-a0bf-cc5307f70549\") " pod="kube-system/coredns-6d4b75cb6d-bwhc5"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.869026    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwns\" (UniqueName: \"kubernetes.io/projected/409b11ce-c83a-495e-9760-5e263f8a9102-kube-api-access-2kwns\") pod \"storage-provisioner\" (UID: \"409b11ce-c83a-495e-9760-5e263f8a9102\") " pod="kube-system/storage-provisioner"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.869134    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f485b948-a1c4-4d16-97b0-2c878b1d0c19-xtables-lock\") pod \"kindnet-788mf\" (UID: \"f485b948-a1c4-4d16-97b0-2c878b1d0c19\") " pod="kube-system/kindnet-788mf"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.869169    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f485b948-a1c4-4d16-97b0-2c878b1d0c19-lib-modules\") pod \"kindnet-788mf\" (UID: \"f485b948-a1c4-4d16-97b0-2c878b1d0c19\") " pod="kube-system/kindnet-788mf"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.869191    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e56374b3-4de5-4172-9350-2eb0ebedb824-kube-proxy\") pod \"kube-proxy-bzwwc\" (UID: \"e56374b3-4de5-4172-9350-2eb0ebedb824\") " pod="kube-system/kube-proxy-bzwwc"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.869212    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99lfz\" (UniqueName: \"kubernetes.io/projected/42521aa1-c950-4042-a0bf-cc5307f70549-kube-api-access-99lfz\") pod \"coredns-6d4b75cb6d-bwhc5\" (UID: \"42521aa1-c950-4042-a0bf-cc5307f70549\") " pod="kube-system/coredns-6d4b75cb6d-bwhc5"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.869232    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f485b948-a1c4-4d16-97b0-2c878b1d0c19-cni-cfg\") pod \"kindnet-788mf\" (UID: \"f485b948-a1c4-4d16-97b0-2c878b1d0c19\") " pod="kube-system/kindnet-788mf"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.869259    1015 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl987\" (UniqueName: \"kubernetes.io/projected/f485b948-a1c4-4d16-97b0-2c878b1d0c19-kube-api-access-jl987\") pod \"kindnet-788mf\" (UID: \"f485b948-a1c4-4d16-97b0-2c878b1d0c19\") " pod="kube-system/kindnet-788mf"
	May 31 19:10:28 test-preload-575369 kubelet[1015]: I0531 19:10:28.869303    1015 reconciler.go:159] "Reconciler: start to sync state"
	May 31 19:10:29 test-preload-575369 kubelet[1015]: W0531 19:10:29.595786    1015 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4/crio/crio-c0e15a6aebd2aae7d006480971fbad75985be47f41977820f61f8047e9a8b2be WatchSource:0}: Error finding container c0e15a6aebd2aae7d006480971fbad75985be47f41977820f61f8047e9a8b2be: Status 404 returned error &{%!s(*http.body=&{0xc00184e000 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x845800) %!s(func() error=0x845900)}
	May 31 19:10:29 test-preload-575369 kubelet[1015]: W0531 19:10:29.617311    1015 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4/crio/crio-94c6669b29aebd2e0afa2462ecf02beb826cfb0cf44dad791d60568bdb3b0c31 WatchSource:0}: Error finding container 94c6669b29aebd2e0afa2462ecf02beb826cfb0cf44dad791d60568bdb3b0c31: Status 404 returned error &{%!s(*http.body=&{0xc000627830 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x845800) %!s(func() error=0x845900)}
	May 31 19:10:29 test-preload-575369 kubelet[1015]: W0531 19:10:29.886939    1015 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4/crio/crio-815d38fd4cf9e9aac023a29113bd65329490b9330238471949a5e05e9c8917b1 WatchSource:0}: Error finding container 815d38fd4cf9e9aac023a29113bd65329490b9330238471949a5e05e9c8917b1: Status 404 returned error &{%!s(*http.body=&{0xc0017b2bd0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x845800) %!s(func() error=0x845900)}
	May 31 19:10:30 test-preload-575369 kubelet[1015]: W0531 19:10:30.225021    1015 manager.go:1176] Failed to process watch event {EventType:0 Name:/docker/160325a16b6df5b51e731884bc961442487ea89d8f0560499c2824bf1820d9d4/crio/crio-f59163bd13f02ec021df921e0a508faba6a111ecca73b8b5800ca9bc24b7fbcb WatchSource:0}: Error finding container f59163bd13f02ec021df921e0a508faba6a111ecca73b8b5800ca9bc24b7fbcb: Status 404 returned error &{%!s(*http.body=&{0xc0012a0228 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x845800) %!s(func() error=0x845900)}
	May 31 19:10:30 test-preload-575369 kubelet[1015]: I0531 19:10:30.850051    1015 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
	May 31 19:10:38 test-preload-575369 kubelet[1015]: I0531 19:10:38.177768    1015 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
	
	* 
	* ==> storage-provisioner [61e4cb2d8fd619d1565797633eca687e6902c3733dfc989ae36bc530967fa48e] <==
	* I0531 19:10:30.391378       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 19:10:30.449984       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 19:10:30.450028       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 19:10:47.886841       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 19:10:47.886909       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56bd9034-1b1d-4ed9-9ccc-570eb2617d14", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' test-preload-575369_6414f4b9-1631-4135-832a-3639620c8286 became leader
	I0531 19:10:47.887033       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_test-preload-575369_6414f4b9-1631-4135-832a-3639620c8286!
	I0531 19:10:47.987760       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_test-preload-575369_6414f4b9-1631-4135-832a-3639620c8286!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-575369 -n test-preload-575369
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-575369 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
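A minimal sketch for regenerating a post-mortem bundle like the one above by hand, assuming the profile still exists (binary path and profile name are taken from this report; --file is minikube's flag for writing the bundle to a file instead of stdout):

	out/minikube-linux-amd64 -p test-preload-575369 logs --file=post-mortem.txt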
helpers_test.go:175: Cleaning up "test-preload-575369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-575369
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-575369: (2.246477828s)
--- FAIL: TestPreload (149.18s)
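A minimal sketch for re-running just this test locally, assuming a minikube source checkout with a built out/minikube-linux-amd64; the -run pattern targets this test, the start args mirror this job's docker/crio configuration, and the timeout is illustrative (exact flags may vary; see the repo's integration-testing docs):

	go test -v -timeout 60m ./test/integration -run TestPreload -args --minikube-start-args="--driver=docker --container-runtime=crio"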

TestRunningBinaryUpgrade (75.73s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade


=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.9.0.3205004325.exe start -p running-upgrade-731337 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.9.0.3205004325.exe start -p running-upgrade-731337 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m9.312312853s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-731337 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-731337 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.883879446s)

-- stdout --
	* [running-upgrade-731337] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-731337 in cluster running-upgrade-731337
	* Pulling base image ...
	* Updating the running docker "running-upgrade-731337" container ...
	
	

-- /stdout --
** stderr ** 
	I0531 19:15:09.531486  181938 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:15:09.531652  181938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:15:09.531661  181938 out.go:309] Setting ErrFile to fd 2...
	I0531 19:15:09.531668  181938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:15:09.531834  181938 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 19:15:09.532638  181938 out.go:303] Setting JSON to false
	I0531 19:15:09.534607  181938 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3459,"bootTime":1685557051,"procs":805,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:15:09.534667  181938 start.go:137] virtualization: kvm guest
	I0531 19:15:09.537763  181938 out.go:177] * [running-upgrade-731337] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:15:09.539575  181938 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:15:09.541806  181938 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:15:09.539589  181938 notify.go:220] Checking for updates...
	I0531 19:15:09.543714  181938 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:15:09.545478  181938 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 19:15:09.547558  181938 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:15:09.549564  181938 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:15:09.551908  181938 config.go:182] Loaded profile config "running-upgrade-731337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0531 19:15:09.551936  181938 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 19:15:09.557121  181938 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0531 19:15:09.558850  181938 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:15:09.585643  181938 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:15:09.585732  181938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:15:09.640696  181938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:96 SystemTime:2023-05-31 19:15:09.630415231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:15:09.640813  181938 docker.go:294] overlay module found
	I0531 19:15:09.643306  181938 out.go:177] * Using the docker driver based on existing profile
	I0531 19:15:09.645209  181938 start.go:297] selected driver: docker
	I0531 19:15:09.645229  181938 start.go:875] validating driver "docker" against &{Name:running-upgrade-731337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-731337 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:15:09.645335  181938 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:15:09.646131  181938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:15:09.717832  181938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:true NGoroutines:96 SystemTime:2023-05-31 19:15:09.705193319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:15:09.718242  181938 cni.go:84] Creating CNI manager for ""
	I0531 19:15:09.718271  181938 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0531 19:15:09.718284  181938 start_flags.go:319] config:
	{Name:running-upgrade-731337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-731337 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:15:09.721535  181938 out.go:177] * Starting control plane node running-upgrade-731337 in cluster running-upgrade-731337
	I0531 19:15:09.723386  181938 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:15:09.725356  181938 out.go:177] * Pulling base image ...
	I0531 19:15:09.727010  181938 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0531 19:15:09.727039  181938 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:15:09.749741  181938 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:15:09.749780  181938 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	W0531 19:15:09.750656  181938 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0531 19:15:09.750845  181938 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/running-upgrade-731337/config.json ...
	I0531 19:15:09.751104  181938 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:15:09.751142  181938 start.go:364] acquiring machines lock for running-upgrade-731337: {Name:mk71fa502e06f1cf3e941386d3ef05f9053eae5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:15:09.751250  181938 start.go:368] acquired machines lock for "running-upgrade-731337" in 67.184µs
	I0531 19:15:09.751274  181938 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:15:09.751280  181938 fix.go:55] fixHost starting: m01
	I0531 19:15:09.751563  181938 cli_runner.go:164] Run: docker container inspect running-upgrade-731337 --format={{.State.Status}}
	I0531 19:15:09.751837  181938 cache.go:107] acquiring lock: {Name:mkb7f3600ae80e4e74cf23a517c08c15646bd580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:15:09.751870  181938 cache.go:107] acquiring lock: {Name:mk6960ea70c07a6f869612360fc2b00ca856b2ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:15:09.751874  181938 cache.go:107] acquiring lock: {Name:mk1edd173f65f616e697f96c44deb4e47a8b3b87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:15:09.751917  181938 cache.go:115] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 19:15:09.751917  181938 cache.go:115] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0531 19:15:09.751927  181938 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 59.558µs
	I0531 19:15:09.751945  181938 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0531 19:15:09.751927  181938 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 95.327µs
	I0531 19:15:09.751945  181938 cache.go:107] acquiring lock: {Name:mk78f7f67cc667e67dac6eb4a7e4ca6786150835 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:15:09.751961  181938 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 19:15:09.751962  181938 cache.go:115] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0531 19:15:09.751932  181938 cache.go:107] acquiring lock: {Name:mkef774c707d89aabb9116317d06276ae465a573 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:15:09.751952  181938 cache.go:107] acquiring lock: {Name:mk2ab8d0adfcc87f523cb56fbd6186a454def42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:15:09.751975  181938 cache.go:107] acquiring lock: {Name:mk93805810d4da23ca5d0d221d9a815489ae5f0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:15:09.751985  181938 cache.go:115] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0531 19:15:09.751972  181938 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 113.691µs
	I0531 19:15:09.752023  181938 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0531 19:15:09.752013  181938 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 69.308µs
	I0531 19:15:09.752047  181938 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0531 19:15:09.751837  181938 cache.go:107] acquiring lock: {Name:mk4fd277c4cfe6f1bec4b7a423a87b562dc5b7c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:15:09.752060  181938 cache.go:115] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0531 19:15:09.752092  181938 cache.go:115] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0531 19:15:09.752098  181938 cache.go:115] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0531 19:15:09.752087  181938 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 172.547µs
	I0531 19:15:09.752116  181938 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 283.254µs
	I0531 19:15:09.752126  181938 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0531 19:15:09.752128  181938 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0531 19:15:09.752114  181938 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 139.415µs
	I0531 19:15:09.752147  181938 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0531 19:15:09.752099  181938 cache.go:115] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0531 19:15:09.752158  181938 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 266.608µs
	I0531 19:15:09.752167  181938 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0531 19:15:09.752175  181938 cache.go:87] Successfully saved all images to host disk.
	I0531 19:15:09.777954  181938 fix.go:103] recreateIfNeeded on running-upgrade-731337: state=Running err=<nil>
	W0531 19:15:09.777986  181938 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 19:15:09.781050  181938 out.go:177] * Updating the running docker "running-upgrade-731337" container ...
	I0531 19:15:09.783644  181938 machine.go:88] provisioning docker machine ...
	I0531 19:15:09.783677  181938 ubuntu.go:169] provisioning hostname "running-upgrade-731337"
	I0531 19:15:09.783735  181938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-731337
	I0531 19:15:09.802850  181938 main.go:141] libmachine: Using SSH client type: native
	I0531 19:15:09.803458  181938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0531 19:15:09.803547  181938 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-731337 && echo "running-upgrade-731337" | sudo tee /etc/hostname
	I0531 19:15:09.922025  181938 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-731337
	
	I0531 19:15:09.922119  181938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-731337
	I0531 19:15:09.946031  181938 main.go:141] libmachine: Using SSH client type: native
	I0531 19:15:09.946440  181938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0531 19:15:09.946457  181938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-731337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-731337/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-731337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:15:10.056547  181938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:15:10.056580  181938 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-7270/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-7270/.minikube}
	I0531 19:15:10.056606  181938 ubuntu.go:177] setting up certificates
	I0531 19:15:10.056623  181938 provision.go:83] configureAuth start
	I0531 19:15:10.056680  181938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-731337
	I0531 19:15:10.079216  181938 provision.go:138] copyHostCerts
	I0531 19:15:10.079271  181938 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem, removing ...
	I0531 19:15:10.079284  181938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem
	I0531 19:15:10.079369  181938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem (1123 bytes)
	I0531 19:15:10.079488  181938 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem, removing ...
	I0531 19:15:10.079495  181938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem
	I0531 19:15:10.079523  181938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem (1675 bytes)
	I0531 19:15:10.079581  181938 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem, removing ...
	I0531 19:15:10.079587  181938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem
	I0531 19:15:10.079615  181938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem (1078 bytes)
	I0531 19:15:10.079658  181938 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-731337 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-731337]
	I0531 19:15:10.396243  181938 provision.go:172] copyRemoteCerts
	I0531 19:15:10.396325  181938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:15:10.396367  181938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-731337
	I0531 19:15:10.421629  181938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/running-upgrade-731337/id_rsa Username:docker}
	I0531 19:15:10.503294  181938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0531 19:15:10.519038  181938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 19:15:10.534874  181938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:15:10.558533  181938 provision.go:86] duration metric: configureAuth took 501.894674ms
	I0531 19:15:10.558566  181938 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:15:10.558774  181938 config.go:182] Loaded profile config "running-upgrade-731337": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0531 19:15:10.558907  181938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-731337
	I0531 19:15:10.585807  181938 main.go:141] libmachine: Using SSH client type: native
	I0531 19:15:10.586460  181938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0531 19:15:10.586493  181938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:15:11.118737  181938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:15:11.118764  181938 machine.go:91] provisioned docker machine in 1.335105235s
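The block above is how the provisioner wires the service CIDR into CRI-O: it writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube over SSH and restarts the daemon. A minimal sketch of the same write, assuming (as the restart implies) that the crio systemd unit sources that sysconfig file:

	sudo mkdir -p /etc/sysconfig
	echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i environmentfile   # check the assumption that the unit reads this file
	sudo systemctl restart crio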
	I0531 19:15:11.118774  181938 start.go:300] post-start starting for "running-upgrade-731337" (driver="docker")
	I0531 19:15:11.118780  181938 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:15:11.118833  181938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:15:11.118869  181938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-731337
	I0531 19:15:11.135849  181938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/running-upgrade-731337/id_rsa Username:docker}
	I0531 19:15:11.223731  181938 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:15:11.226777  181938 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:15:11.226806  181938 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:15:11.226819  181938 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:15:11.226826  181938 info.go:137] Remote host: Ubuntu 19.10
	I0531 19:15:11.226835  181938 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/addons for local assets ...
	I0531 19:15:11.226892  181938 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/files for local assets ...
	I0531 19:15:11.226986  181938 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> 142322.pem in /etc/ssl/certs
	I0531 19:15:11.227086  181938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:15:11.233742  181938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /etc/ssl/certs/142322.pem (1708 bytes)
	I0531 19:15:11.254101  181938 start.go:303] post-start completed in 135.312284ms
	I0531 19:15:11.254182  181938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:15:11.254241  181938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-731337
	I0531 19:15:11.281760  181938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/running-upgrade-731337/id_rsa Username:docker}
	I0531 19:15:11.365549  181938 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:15:11.374977  181938 fix.go:57] fixHost completed within 1.623689117s
	I0531 19:15:11.375009  181938 start.go:83] releasing machines lock for "running-upgrade-731337", held for 1.62374029s
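The two df probes above are the post-start disk checks: percent of /var consumed, then whole gigabytes still free. Reproduced standalone:

	df -h /var | awk 'NR==2{print $5}'    # Use% column, e.g. "13%"
	df -BG /var | awk 'NR==2{print $4}'   # Avail column in GiB, e.g. "50G"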
	I0531 19:15:11.375081  181938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-731337
	I0531 19:15:11.409711  181938 ssh_runner.go:195] Run: cat /version.json
	I0531 19:15:11.409777  181938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-731337
	I0531 19:15:11.410040  181938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:15:11.410110  181938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-731337
	I0531 19:15:11.429447  181938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/running-upgrade-731337/id_rsa Username:docker}
	I0531 19:15:11.430496  181938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/running-upgrade-731337/id_rsa Username:docker}
	W0531 19:15:11.517088  181938 start.go:414] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0531 19:15:11.517182  181938 ssh_runner.go:195] Run: systemctl --version
	I0531 19:15:11.553712  181938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:15:11.605897  181938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:15:11.610236  181938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:15:11.625993  181938 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:15:11.626076  181938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:15:11.650796  181938 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
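Those two find invocations are the CNI hand-off: any preinstalled loopback, bridge, or podman configs under /etc/cni/net.d get a .mk_disabled suffix so they no longer shadow the CNI minikube manages, which is exactly what the cni.go:261 line reports for 100-crio-bridge.conf and 87-podman-bridge.conflist. A sketch for inspecting, or reverting, the rename:

	ls /etc/cni/net.d/                          # disabled configs end in .mk_disabled
	for f in /etc/cni/net.d/*.mk_disabled; do   # undo the hand-off if needed
		[ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"
	done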
	I0531 19:15:11.650815  181938 start.go:481] detecting cgroup driver to use...
	I0531 19:15:11.650845  181938 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:15:11.650884  181938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:15:11.677414  181938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:15:11.687543  181938 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:15:11.687589  181938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:15:11.700439  181938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:15:11.709540  181938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0531 19:15:11.718551  181938 docker.go:203] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0531 19:15:11.718620  181938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:15:11.801948  181938 docker.go:209] disabling docker service ...
	I0531 19:15:11.802017  181938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:15:11.811618  181938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:15:11.821117  181938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:15:11.910259  181938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:15:12.006558  181938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
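This stop/disable/mask sequence leaves CRI-O as the only live runtime: containerd is stopped, and the cri-docker and docker units plus their sockets are masked so nothing re-activates them. A quick verification sketch:

	systemctl is-active containerd docker crio              # expect: inactive inactive active
	systemctl is-enabled docker.socket cri-docker.socket    # expect: masked (or an error where the unit is absent, as for cri-docker.socket above)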
	I0531 19:15:12.015533  181938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:15:12.062499  181938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0531 19:15:12.062562  181938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:15:12.148260  181938 out.go:177] 
	W0531 19:15:12.211851  181938 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0531 19:15:12.211881  181938 out.go:239] * 
	W0531 19:15:12.213088  181938 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:15:12.274527  181938 out.go:177] 

** /stderr **
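The fatal step is the pause_image rewrite: the new binary edits the drop-in /etc/crio/crio.conf.d/02-crio.conf, but the v1.9.0-era kicbase image predates that layout (presumably shipping only a monolithic /etc/crio/crio.conf), so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A guarded variant of the same edit, as a sketch rather than the shipped fix, would fall back to the legacy path:

	conf=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$conf" ] || conf=/etc/crio/crio.conf    # older images lack the drop-in directory
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"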
version_upgrade_test.go:144: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-731337 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-05-31 19:15:12.367651127 +0000 UTC m=+1899.626006563
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-731337
helpers_test.go:235: (dbg) docker inspect running-upgrade-731337:

-- stdout --
	[
	    {
	        "Id": "620853b12f0c9e2b24c916fb0950fdc309a81186bce8813aabf6a41d67942d06",
	        "Created": "2023-05-31T19:14:00.520156216Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160728,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-05-31T19:14:01.090456708Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/620853b12f0c9e2b24c916fb0950fdc309a81186bce8813aabf6a41d67942d06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/620853b12f0c9e2b24c916fb0950fdc309a81186bce8813aabf6a41d67942d06/hostname",
	        "HostsPath": "/var/lib/docker/containers/620853b12f0c9e2b24c916fb0950fdc309a81186bce8813aabf6a41d67942d06/hosts",
	        "LogPath": "/var/lib/docker/containers/620853b12f0c9e2b24c916fb0950fdc309a81186bce8813aabf6a41d67942d06/620853b12f0c9e2b24c916fb0950fdc309a81186bce8813aabf6a41d67942d06-json.log",
	        "Name": "/running-upgrade-731337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "running-upgrade-731337:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/14588386f90caf92a771b97ecc5760fc735b1ddfb35fc60d07e719cee43408f6-init/diff:/var/lib/docker/overlay2/42ea151507921dda9aaac2a600e3e73cf4192f2b0a21f5ee8523108ca97db676/diff:/var/lib/docker/overlay2/19ff5ab33fa84a10dd683e273076101dd2e30b7256bf9dc4f4b2ff6b8a9156b5/diff:/var/lib/docker/overlay2/ed84e05d70be50d1020627f902d7c95df42339368a66bbed3e932a36cecdff46/diff:/var/lib/docker/overlay2/bc0b60e4a49f31afcf420c17dbd950e2002a5540a988eb427d7619770c874a6c/diff:/var/lib/docker/overlay2/f5071cd2b9dc279fa1d84cc126d3660ef64cd9a2cb347b53e6adaf32316e96a1/diff:/var/lib/docker/overlay2/b53015d64151c3893545aa87c38dea817da245b4d6f87fdcf1d1a5c7a9d8c620/diff:/var/lib/docker/overlay2/53a174938c0177197e18630cb7d9ff6e5829891d1086c381a90e5aac7aa89621/diff:/var/lib/docker/overlay2/2fd9396f772c9a16812f8f969e52e56a171d564405005d978b45493a6e5df0be/diff:/var/lib/docker/overlay2/76fc1e90387246944406b54694bc2bf7709387bf814cb9e9d542410f59abae85/diff:/var/lib/docker/overlay2/5f7cb4
e0ce6f0dacaa7d4e4dc21c03a7712b7ea59ba4313d33e4a746a4697dd4/diff:/var/lib/docker/overlay2/cfcedf5bec0cf1c2e7c95641a7f971227dd254b2d90e6a1650eaa8d531f0f94a/diff:/var/lib/docker/overlay2/23fc3e3f809079eda6e414d833b673e4c21e5d47c15be232aa339d8129828526/diff:/var/lib/docker/overlay2/4dc401c28fe7dda4199cddc517b2c1771c637b1b3873ec959d26e6a975930891/diff:/var/lib/docker/overlay2/bb6f29f157c0ad9f21f4aa66745eb94cc3874c858149b24afb1dacced2d61e2f/diff:/var/lib/docker/overlay2/a0f1121581f38624746acae53679c0e87ef756f7813ceceefe1b60db78cbdadb/diff:/var/lib/docker/overlay2/48c9a6ad56e6e879f3b85c1d3b70b6ae9f0d03ba16477691b7747d5a2301662d/diff:/var/lib/docker/overlay2/4d16d2cb6d63fbcde5bf8f57e13d30279e7f72347901060793d6af7ab7496159/diff:/var/lib/docker/overlay2/b0aa3fbda322c743e79aed94ab113d7fa42eb6ff1332a0effe49ec5ed7b47372/diff:/var/lib/docker/overlay2/549bbba45cff2d1207d333b751efd9889c33d702075c704008775939129167ff/diff:/var/lib/docker/overlay2/d03749a8304a3ccf1f210776bb7eac30806966b8ce46a6e2064469b48489a3bb/diff:/var/lib/d
ocker/overlay2/3545ca2ccf084740cc4d73745a325fb84a0f6ed18dbd9be2ff5eb71c73909c44/diff",
	                "MergedDir": "/var/lib/docker/overlay2/14588386f90caf92a771b97ecc5760fc735b1ddfb35fc60d07e719cee43408f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/14588386f90caf92a771b97ecc5760fc735b1ddfb35fc60d07e719cee43408f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/14588386f90caf92a771b97ecc5760fc735b1ddfb35fc60d07e719cee43408f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-731337",
	                "Source": "/var/lib/docker/volumes/running-upgrade-731337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-731337",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-731337",
	                "name.minikube.sigs.k8s.io": "running-upgrade-731337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b964a81280bd1becc31da3ac4f246bdbc89c42aa0265c9de8d8e0839f6c81e60",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32938"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32937"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32936"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b964a81280bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "2ea274fd1f3595880dece10a3a412f5b61becda9edac0dddc8324546641a9194",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "c1695004d2d6b596533883e3a9f39814c28cc8bc3006f75edc4cfb9817b03b4d",
	                    "EndpointID": "2ea274fd1f3595880dece10a3a412f5b61becda9edac0dddc8324546641a9194",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
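The inspect output also confirms the SSH plumbing the upgrade used: 22/tcp is published on 127.0.0.1:32938, matching every sshutil.go dial in the log above. The harness extracts that value with the Go template shown in its own cli_runner lines:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-731337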
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-731337 -n running-upgrade-731337
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-731337 -n running-upgrade-731337: exit status 4 (284.858784ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0531 19:15:12.637510  183336 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-731337" does not appear in /home/jenkins/minikube-integration/16569-7270/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-731337" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
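The status error is a follow-on symptom rather than a second failure: because start aborted before registering the cluster, the "running-upgrade-731337" context never landed in the kubeconfig. On a cluster that did come up, the warning's own remedy would be, roughly:

	out/minikube-linux-amd64 -p running-upgrade-731337 update-context
	kubectl config use-context running-upgrade-731337
	kubectl get nodes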
helpers_test.go:175: Cleaning up "running-upgrade-731337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-731337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-731337: (2.726066921s)
--- FAIL: TestRunningBinaryUpgrade (75.73s)

TestStoppedBinaryUpgrade/Upgrade (99.97s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.1552927497.exe start -p stopped-upgrade-360822 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0531 19:13:41.009216   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.1552927497.exe start -p stopped-upgrade-360822 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m20.188296613s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.9.0.1552927497.exe -p stopped-upgrade-360822 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.9.0.1552927497.exe -p stopped-upgrade-360822 stop: (12.76050856s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-360822 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-360822 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (7.019477088s)

-- stdout --
	* [stopped-upgrade-360822] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-360822 in cluster stopped-upgrade-360822
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-360822" ...

-- /stdout --
** stderr ** 
	I0531 19:14:39.994839  172522 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:14:39.995010  172522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:14:39.995021  172522 out.go:309] Setting ErrFile to fd 2...
	I0531 19:14:39.995027  172522 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:14:39.995167  172522 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 19:14:39.995914  172522 out.go:303] Setting JSON to false
	I0531 19:14:39.997765  172522 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3429,"bootTime":1685557051,"procs":769,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:14:39.997846  172522 start.go:137] virtualization: kvm guest
	I0531 19:14:40.000758  172522 out.go:177] * [stopped-upgrade-360822] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:14:40.003461  172522 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:14:40.003494  172522 notify.go:220] Checking for updates...
	I0531 19:14:40.005307  172522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:14:40.007870  172522 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:14:40.009585  172522 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 19:14:40.011147  172522 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:14:40.012738  172522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:14:40.014930  172522 config.go:182] Loaded profile config "stopped-upgrade-360822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0531 19:14:40.014960  172522 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8
	I0531 19:14:40.017693  172522 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0531 19:14:40.019653  172522 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:14:40.061698  172522 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:14:40.061803  172522 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:14:40.149977  172522 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:115 OomKillDisable:true NGoroutines:98 SystemTime:2023-05-31 19:14:40.139588556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Arch
itecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:14:40.150069  172522 docker.go:294] overlay module found
	I0531 19:14:40.152419  172522 out.go:177] * Using the docker driver based on existing profile
	I0531 19:14:40.154329  172522 start.go:297] selected driver: docker
	I0531 19:14:40.154345  172522 start.go:875] validating driver "docker" against &{Name:stopped-upgrade-360822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-360822 Namespace: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:14:40.154445  172522 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:14:40.155595  172522 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:14:40.217156  172522 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:115 OomKillDisable:true NGoroutines:98 SystemTime:2023-05-31 19:14:40.205679294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Arch
itecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:14:40.217505  172522 cni.go:84] Creating CNI manager for ""
	I0531 19:14:40.217523  172522 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0531 19:14:40.217531  172522 start_flags.go:319] config:
	{Name:stopped-upgrade-360822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-360822 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 19:14:40.220141  172522 out.go:177] * Starting control plane node stopped-upgrade-360822 in cluster stopped-upgrade-360822
	I0531 19:14:40.222000  172522 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 19:14:40.224038  172522 out.go:177] * Pulling base image ...
	I0531 19:14:40.225875  172522 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0531 19:14:40.225945  172522 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 19:14:40.248065  172522 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon, skipping pull
	I0531 19:14:40.248092  172522 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in daemon, skipping load
	W0531 19:14:40.299048  172522 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0531 19:14:40.299221  172522 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/stopped-upgrade-360822/config.json ...
	I0531 19:14:40.299332  172522 cache.go:107] acquiring lock: {Name:mkb7f3600ae80e4e74cf23a517c08c15646bd580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:40.299438  172522 cache.go:115] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 19:14:40.299450  172522 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 138.026µs
	I0531 19:14:40.299463  172522 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 19:14:40.299473  172522 cache.go:195] Successfully downloaded all kic artifacts
	I0531 19:14:40.299476  172522 cache.go:107] acquiring lock: {Name:mk2ab8d0adfcc87f523cb56fbd6186a454def42f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:40.299499  172522 start.go:364] acquiring machines lock for stopped-upgrade-360822: {Name:mkb01ac09f884443e2f60902d66c0d4df7e43662 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:40.299563  172522 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0531 19:14:40.299569  172522 start.go:368] acquired machines lock for "stopped-upgrade-360822" in 56.305µs
	I0531 19:14:40.299582  172522 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:14:40.299587  172522 fix.go:55] fixHost starting: m01
	I0531 19:14:40.299717  172522 cache.go:107] acquiring lock: {Name:mk1edd173f65f616e697f96c44deb4e47a8b3b87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:40.299743  172522 cache.go:107] acquiring lock: {Name:mk93805810d4da23ca5d0d221d9a815489ae5f0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:40.299795  172522 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0531 19:14:40.299852  172522 cli_runner.go:164] Run: docker container inspect stopped-upgrade-360822 --format={{.State.Status}}
	I0531 19:14:40.299853  172522 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0531 19:14:40.299911  172522 cache.go:107] acquiring lock: {Name:mk78f7f67cc667e67dac6eb4a7e4ca6786150835 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:40.299988  172522 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0531 19:14:40.299998  172522 cache.go:107] acquiring lock: {Name:mkef774c707d89aabb9116317d06276ae465a573 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:40.300063  172522 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0531 19:14:40.300155  172522 cache.go:107] acquiring lock: {Name:mk4fd277c4cfe6f1bec4b7a423a87b562dc5b7c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:40.300227  172522 cache.go:107] acquiring lock: {Name:mk6960ea70c07a6f869612360fc2b00ca856b2ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:14:40.300239  172522 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0531 19:14:40.300288  172522 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0531 19:14:40.301419  172522 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0531 19:14:40.301464  172522 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0531 19:14:40.301662  172522 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0531 19:14:40.301760  172522 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0531 19:14:40.301833  172522 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0531 19:14:40.301854  172522 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0531 19:14:40.301919  172522 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0531 19:14:40.329972  172522 fix.go:103] recreateIfNeeded on stopped-upgrade-360822: state=Stopped err=<nil>
	W0531 19:14:40.330010  172522 fix.go:129] unexpected machine state, will restart: <nil>
	I0531 19:14:40.332361  172522 out.go:177] * Restarting existing docker container for "stopped-upgrade-360822" ...
	I0531 19:14:40.334371  172522 cli_runner.go:164] Run: docker start stopped-upgrade-360822
	I0531 19:14:40.496793  172522 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0531 19:14:40.501994  172522 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0531 19:14:40.511902  172522 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0531 19:14:40.514587  172522 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0531 19:14:40.552758  172522 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0531 19:14:40.559742  172522 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0531 19:14:40.578562  172522 cache.go:162] opening:  /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0531 19:14:40.657571  172522 cache.go:157] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0531 19:14:40.657597  172522 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 357.874697ms
	I0531 19:14:40.657608  172522 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0531 19:14:40.669859  172522 cli_runner.go:164] Run: docker container inspect stopped-upgrade-360822 --format={{.State.Status}}
	I0531 19:14:40.705964  172522 kic.go:426] container "stopped-upgrade-360822" state is running.
	I0531 19:14:40.707963  172522 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-360822
	I0531 19:14:40.735381  172522 profile.go:148] Saving config to /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/stopped-upgrade-360822/config.json ...
	I0531 19:14:40.772920  172522 machine.go:88] provisioning docker machine ...
	I0531 19:14:40.772993  172522 ubuntu.go:169] provisioning hostname "stopped-upgrade-360822"
	I0531 19:14:40.773113  172522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-360822
	I0531 19:14:40.807686  172522 main.go:141] libmachine: Using SSH client type: native
	I0531 19:14:40.808963  172522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I0531 19:14:40.809006  172522 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-360822 && echo "stopped-upgrade-360822" | sudo tee /etc/hostname
	I0531 19:14:40.810321  172522 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0531 19:14:41.056574  172522 cache.go:157] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0531 19:14:41.056605  172522 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 756.695139ms
	I0531 19:14:41.056622  172522 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0531 19:14:41.628863  172522 cache.go:157] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0531 19:14:41.628905  172522 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.328678953s
	I0531 19:14:41.628921  172522 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0531 19:14:41.719732  172522 cache.go:157] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0531 19:14:41.719762  172522 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.419766057s
	I0531 19:14:41.719777  172522 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0531 19:14:42.020279  172522 cache.go:157] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0531 19:14:42.020474  172522 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.72099763s
	I0531 19:14:42.020493  172522 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0531 19:14:42.271494  172522 cache.go:157] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0531 19:14:42.271520  172522 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.971371761s
	I0531 19:14:42.271531  172522 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0531 19:14:43.594896  172522 cache.go:157] /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0531 19:14:43.594922  172522 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 3.295213152s
	I0531 19:14:43.594933  172522 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0531 19:14:43.594951  172522 cache.go:87] Successfully saved all images to host disk.
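All of the cache.go chatter above is the fallback path for the preload.go 404 noted earlier: with no v1.18.0 CRI-O preload tarball published, each Kubernetes image is fetched and saved to the host cache individually. The missing tarball is easy to confirm:

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 | head -n 1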
	I0531 19:14:43.935428  172522 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-360822
	
	I0531 19:14:43.935512  172522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-360822
	I0531 19:14:43.962090  172522 main.go:141] libmachine: Using SSH client type: native
	I0531 19:14:43.962729  172522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I0531 19:14:43.962773  172522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-360822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-360822/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-360822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:14:44.078156  172522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:14:44.078189  172522 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16569-7270/.minikube CaCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16569-7270/.minikube}
	I0531 19:14:44.078221  172522 ubuntu.go:177] setting up certificates
	I0531 19:14:44.078234  172522 provision.go:83] configureAuth start
	I0531 19:14:44.078298  172522 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-360822
	I0531 19:14:44.101529  172522 provision.go:138] copyHostCerts
	I0531 19:14:44.101593  172522 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem, removing ...
	I0531 19:14:44.101608  172522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem
	I0531 19:14:44.101675  172522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/ca.pem (1078 bytes)
	I0531 19:14:44.101780  172522 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem, removing ...
	I0531 19:14:44.101790  172522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem
	I0531 19:14:44.101825  172522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/cert.pem (1123 bytes)
	I0531 19:14:44.101947  172522 exec_runner.go:144] found /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem, removing ...
	I0531 19:14:44.101961  172522 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem
	I0531 19:14:44.101993  172522 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16569-7270/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16569-7270/.minikube/key.pem (1675 bytes)
	I0531 19:14:44.102049  172522 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-360822 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-360822]
	I0531 19:14:44.223186  172522 provision.go:172] copyRemoteCerts
	I0531 19:14:44.223248  172522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:14:44.223291  172522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-360822
	I0531 19:14:44.246514  172522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/stopped-upgrade-360822/id_rsa Username:docker}
	I0531 19:14:44.335869  172522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0531 19:14:44.355990  172522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0531 19:14:44.375657  172522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:14:44.394225  172522 provision.go:86] duration metric: configureAuth took 315.977099ms
	I0531 19:14:44.394293  172522 ubuntu.go:193] setting minikube options for container-runtime
	I0531 19:14:44.394449  172522 config.go:182] Loaded profile config "stopped-upgrade-360822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0531 19:14:44.394577  172522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-360822
	I0531 19:14:44.416430  172522 main.go:141] libmachine: Using SSH client type: native
	I0531 19:14:44.416874  172522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32954 <nil> <nil>}
	I0531 19:14:44.416896  172522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:14:46.117302  172522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:14:46.117327  172522 machine.go:91] provisioned docker machine in 5.344369731s
	I0531 19:14:46.117338  172522 start.go:300] post-start starting for "stopped-upgrade-360822" (driver="docker")
	I0531 19:14:46.117346  172522 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:14:46.117410  172522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:14:46.117447  172522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-360822
	I0531 19:14:46.142179  172522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/stopped-upgrade-360822/id_rsa Username:docker}
	I0531 19:14:46.224189  172522 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:14:46.227077  172522 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0531 19:14:46.227107  172522 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0531 19:14:46.227121  172522 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0531 19:14:46.227129  172522 info.go:137] Remote host: Ubuntu 19.10
	I0531 19:14:46.227141  172522 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/addons for local assets ...
	I0531 19:14:46.227202  172522 filesync.go:126] Scanning /home/jenkins/minikube-integration/16569-7270/.minikube/files for local assets ...
	I0531 19:14:46.227288  172522 filesync.go:149] local asset: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem -> 142322.pem in /etc/ssl/certs
	I0531 19:14:46.227396  172522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:14:46.234415  172522 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/ssl/certs/142322.pem --> /etc/ssl/certs/142322.pem (1708 bytes)
	I0531 19:14:46.252487  172522 start.go:303] post-start completed in 135.134013ms
	I0531 19:14:46.252566  172522 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:14:46.252608  172522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-360822
	I0531 19:14:46.274748  172522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/stopped-upgrade-360822/id_rsa Username:docker}
	I0531 19:14:46.361018  172522 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0531 19:14:46.365134  172522 fix.go:57] fixHost completed within 6.065540532s
	I0531 19:14:46.365158  172522 start.go:83] releasing machines lock for "stopped-upgrade-360822", held for 6.065580607s
	I0531 19:14:46.365221  172522 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-360822
	I0531 19:14:46.383256  172522 ssh_runner.go:195] Run: cat /version.json
	I0531 19:14:46.383307  172522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-360822
	I0531 19:14:46.383389  172522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:14:46.383459  172522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-360822
	I0531 19:14:46.401802  172522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/stopped-upgrade-360822/id_rsa Username:docker}
	I0531 19:14:46.403017  172522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32954 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/stopped-upgrade-360822/id_rsa Username:docker}
	W0531 19:14:46.511782  172522 start.go:414] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0531 19:14:46.511865  172522 ssh_runner.go:195] Run: systemctl --version
	I0531 19:14:46.516266  172522 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:14:46.571032  172522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:14:46.576177  172522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:14:46.594126  172522 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0531 19:14:46.594205  172522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:14:46.618533  172522 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 19:14:46.618559  172522 start.go:481] detecting cgroup driver to use...
	I0531 19:14:46.618589  172522 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0531 19:14:46.618631  172522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:14:46.639664  172522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:14:46.652719  172522 docker.go:193] disabling cri-docker service (if available) ...
	I0531 19:14:46.652772  172522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:14:46.662626  172522 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:14:46.672195  172522 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0531 19:14:46.682402  172522 docker.go:203] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0531 19:14:46.682470  172522 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:14:46.753992  172522 docker.go:209] disabling docker service ...
	I0531 19:14:46.754059  172522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:14:46.764810  172522 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:14:46.774768  172522 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:14:46.845867  172522 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:14:46.922700  172522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:14:46.934392  172522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:14:46.947387  172522 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0531 19:14:46.947442  172522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:14:46.958270  172522 out.go:177] 
	W0531 19:14:46.960209  172522 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0531 19:14:46.960229  172522 out.go:239] * 
	W0531 19:14:46.961267  172522 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:14:46.963291  172522 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-360822 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (99.97s)
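Note on the failure above: the sed exits with status 2 because the v1.9.0-era guest image predates CRI-O's drop-in configuration directory, so /etc/crio/crio.conf.d/02-crio.conf does not exist; older images keep their settings in the monolithic /etc/crio/crio.conf instead. A minimal sketch of a probe-then-patch fallback, in Go (the crioConfPath/runCmd helpers here are illustrative assumptions, not minikube's actual crio.go API):

package main

import (
	"fmt"
	"os/exec"
)

// crioConfPath returns the CRI-O config file whose pause_image entry should be
// rewritten: prefer the modern drop-in, fall back to the legacy monolithic file.
func crioConfPath(runCmd func(args ...string) error) string {
	dropIn := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := runCmd("sudo", "test", "-f", dropIn); err == nil {
		return dropIn
	}
	return "/etc/crio/crio.conf" // pre-drop-in layout, e.g. the v1.9.0 image
}

func main() {
	run := func(args ...string) error { return exec.Command(args[0], args[1:]...).Run() }
	// Build the same sed invocation the log shows, but aimed at whichever
	// config file actually exists on the guest.
	sed := fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`,
		"registry.k8s.io/pause:3.2", crioConfPath(run))
	fmt.Println(sed)
}

With a fallback of this shape the pause_image update would target /etc/crio/crio.conf on the old image rather than exiting with RUNTIME_ENABLE on a missing drop-in.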

                                                
                                    

Test pass (272/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.89
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.2/json-events 6.52
11 TestDownloadOnly/v1.27.2/preload-exists 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.18
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
18 TestDownloadOnlyKic 1.16
19 TestBinaryMirror 0.7
20 TestOffline 67.46
22 TestAddons/Setup 122.02
24 TestAddons/parallel/Registry 13.28
26 TestAddons/parallel/InspektorGadget 10.62
27 TestAddons/parallel/MetricsServer 5.45
28 TestAddons/parallel/HelmTiller 9.19
30 TestAddons/parallel/CSI 52.28
31 TestAddons/parallel/Headlamp 10.97
32 TestAddons/parallel/CloudSpanner 5.29
35 TestAddons/serial/GCPAuth/Namespaces 0.11
36 TestAddons/StoppedEnableDisable 12.06
37 TestCertOptions 33.13
38 TestCertExpiration 233.4
40 TestForceSystemdFlag 24.67
41 TestForceSystemdEnv 38.99
42 TestKVMDriverInstallOrUpdate 2.78
46 TestErrorSpam/setup 23.32
47 TestErrorSpam/start 0.55
48 TestErrorSpam/status 0.8
49 TestErrorSpam/pause 1.41
50 TestErrorSpam/unpause 1.48
51 TestErrorSpam/stop 1.35
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 69.14
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 44.78
58 TestFunctional/serial/KubeContext 0.04
59 TestFunctional/serial/KubectlGetPods 0.07
62 TestFunctional/serial/CacheCmd/cache/add_remote 2.68
63 TestFunctional/serial/CacheCmd/cache/add_local 1.11
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
65 TestFunctional/serial/CacheCmd/cache/list 0.04
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
67 TestFunctional/serial/CacheCmd/cache/cache_reload 1.51
68 TestFunctional/serial/CacheCmd/cache/delete 0.09
69 TestFunctional/serial/MinikubeKubectlCmd 0.1
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
71 TestFunctional/serial/ExtraConfig 32.79
72 TestFunctional/serial/ComponentHealth 0.06
73 TestFunctional/serial/LogsCmd 1.36
74 TestFunctional/serial/LogsFileCmd 1.39
76 TestFunctional/parallel/ConfigCmd 0.38
77 TestFunctional/parallel/DashboardCmd 12.65
78 TestFunctional/parallel/DryRun 0.46
79 TestFunctional/parallel/InternationalLanguage 0.18
80 TestFunctional/parallel/StatusCmd 1.4
84 TestFunctional/parallel/ServiceCmdConnect 6.7
85 TestFunctional/parallel/AddonsCmd 0.14
86 TestFunctional/parallel/PersistentVolumeClaim 28.56
88 TestFunctional/parallel/SSHCmd 0.59
89 TestFunctional/parallel/CpCmd 1.21
90 TestFunctional/parallel/MySQL 21.92
91 TestFunctional/parallel/FileSync 0.37
92 TestFunctional/parallel/CertSync 1.64
96 TestFunctional/parallel/NodeLabels 0.06
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
100 TestFunctional/parallel/License 0.22
101 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
102 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
103 TestFunctional/parallel/MountCmd/any-port 8.27
104 TestFunctional/parallel/ProfileCmd/profile_list 0.38
105 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
106 TestFunctional/parallel/Version/short 0.04
107 TestFunctional/parallel/Version/components 0.47
108 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
109 TestFunctional/parallel/ImageCommands/ImageListTable 0.48
110 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
111 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
113 TestFunctional/parallel/ImageCommands/Setup 0.98
114 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.23
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.9
116 TestFunctional/parallel/MountCmd/specific-port 1.71
117 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
118 TestFunctional/parallel/ServiceCmd/List 0.57
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.04
122 TestFunctional/parallel/ServiceCmd/Format 0.36
123 TestFunctional/parallel/ServiceCmd/URL 0.36
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.34
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.73
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.03
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.21
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/delete_addon-resizer_images 0.07
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.01
148 TestIngressAddonLegacy/StartLegacyK8sCluster 61.85
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.07
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.34
155 TestJSONOutput/start/Command 67.26
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.62
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.55
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 5.71
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.19
180 TestKicCustomNetwork/create_custom_network 28.23
181 TestKicCustomNetwork/use_default_bridge_network 23.66
182 TestKicExistingNetwork 23.45
183 TestKicCustomSubnet 24.14
184 TestKicStaticIP 27.57
185 TestMainNoArgs 0.04
186 TestMinikubeProfile 50.16
189 TestMountStart/serial/StartWithMountFirst 4.92
190 TestMountStart/serial/VerifyMountFirst 0.23
191 TestMountStart/serial/StartWithMountSecond 5.02
192 TestMountStart/serial/VerifyMountSecond 0.22
193 TestMountStart/serial/DeleteFirst 1.58
194 TestMountStart/serial/VerifyMountPostDelete 0.22
195 TestMountStart/serial/Stop 1.19
196 TestMountStart/serial/RestartStopped 6.83
197 TestMountStart/serial/VerifyMountPostStop 0.22
200 TestMultiNode/serial/FreshStart2Nodes 85.21
201 TestMultiNode/serial/DeployApp2Nodes 3.49
203 TestMultiNode/serial/AddNode 44.92
204 TestMultiNode/serial/ProfileList 0.25
205 TestMultiNode/serial/CopyFile 8.2
206 TestMultiNode/serial/StopNode 2.03
207 TestMultiNode/serial/StartAfterStop 10.76
208 TestMultiNode/serial/RestartKeepsNodes 113.85
209 TestMultiNode/serial/DeleteNode 4.58
210 TestMultiNode/serial/StopMultiNode 23.8
211 TestMultiNode/serial/RestartMultiNode 71.91
212 TestMultiNode/serial/ValidateNameConflict 25.64
219 TestScheduledStopUnix 100.26
222 TestInsufficientStorage 9.66
225 TestKubernetesUpgrade 353.12
226 TestMissingContainerUpgrade 136.59
228 TestStoppedBinaryUpgrade/Setup 0.48
229 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
230 TestNoKubernetes/serial/StartWithK8s 40.63
235 TestNoKubernetes/serial/StartWithStopK8s 6.46
240 TestNetworkPlugins/group/false 3.66
244 TestNoKubernetes/serial/Start 5.64
245 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
246 TestNoKubernetes/serial/ProfileList 1.09
247 TestNoKubernetes/serial/Stop 1.23
248 TestNoKubernetes/serial/StartNoArgs 6.71
249 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
258 TestPause/serial/Start 48.99
259 TestStoppedBinaryUpgrade/MinikubeLogs 0.57
260 TestPause/serial/SecondStartNoReconfiguration 43.97
261 TestPause/serial/Pause 0.66
262 TestPause/serial/VerifyStatus 0.27
263 TestPause/serial/Unpause 0.65
264 TestPause/serial/PauseAgain 0.78
265 TestPause/serial/DeletePaused 3.25
266 TestPause/serial/VerifyDeletedResources 0.77
267 TestNetworkPlugins/group/auto/Start 70.99
268 TestNetworkPlugins/group/flannel/Start 54.15
269 TestNetworkPlugins/group/auto/KubeletFlags 0.24
270 TestNetworkPlugins/group/auto/NetCatPod 9.31
271 TestNetworkPlugins/group/flannel/ControllerPod 5.02
272 TestNetworkPlugins/group/auto/DNS 0.16
273 TestNetworkPlugins/group/auto/Localhost 0.13
274 TestNetworkPlugins/group/auto/HairPin 0.14
275 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
276 TestNetworkPlugins/group/flannel/NetCatPod 9.3
277 TestNetworkPlugins/group/flannel/DNS 0.18
278 TestNetworkPlugins/group/flannel/Localhost 0.13
279 TestNetworkPlugins/group/flannel/HairPin 0.16
280 TestNetworkPlugins/group/enable-default-cni/Start 42.05
281 TestNetworkPlugins/group/bridge/Start 37.04
282 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
283 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
284 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
285 TestNetworkPlugins/group/bridge/NetCatPod 10.38
286 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
287 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
288 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
289 TestNetworkPlugins/group/bridge/DNS 0.16
290 TestNetworkPlugins/group/bridge/Localhost 0.15
291 TestNetworkPlugins/group/bridge/HairPin 0.16
292 TestNetworkPlugins/group/calico/Start 61.95
293 TestNetworkPlugins/group/kindnet/Start 69.93
294 TestNetworkPlugins/group/custom-flannel/Start 60.56
295 TestNetworkPlugins/group/calico/ControllerPod 5.02
296 TestNetworkPlugins/group/calico/KubeletFlags 0.25
297 TestNetworkPlugins/group/calico/NetCatPod 12.22
298 TestNetworkPlugins/group/calico/DNS 0.17
299 TestNetworkPlugins/group/calico/Localhost 0.17
300 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
301 TestNetworkPlugins/group/calico/HairPin 0.18
302 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
303 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.35
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
305 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
306 TestNetworkPlugins/group/custom-flannel/DNS 0.18
307 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
308 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
310 TestStartStop/group/old-k8s-version/serial/FirstStart 129.36
311 TestNetworkPlugins/group/kindnet/DNS 0.21
312 TestNetworkPlugins/group/kindnet/Localhost 0.17
313 TestNetworkPlugins/group/kindnet/HairPin 0.19
315 TestStartStop/group/no-preload/serial/FirstStart 66.6
317 TestStartStop/group/embed-certs/serial/FirstStart 72.55
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.84
320 TestStartStop/group/no-preload/serial/DeployApp 7.4
321 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.73
322 TestStartStop/group/no-preload/serial/Stop 11.95
323 TestStartStop/group/embed-certs/serial/DeployApp 7.38
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
325 TestStartStop/group/no-preload/serial/SecondStart 590.62
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.43
328 TestStartStop/group/embed-certs/serial/Stop 11.98
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.8
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.93
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
332 TestStartStop/group/embed-certs/serial/SecondStart 340.67
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 336.11
335 TestStartStop/group/old-k8s-version/serial/DeployApp 8.42
336 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.63
337 TestStartStop/group/old-k8s-version/serial/Stop 12.03
338 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
339 TestStartStop/group/old-k8s-version/serial/SecondStart 66.11
340 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
341 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
342 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
343 TestStartStop/group/old-k8s-version/serial/Pause 2.72
345 TestStartStop/group/newest-cni/serial/FirstStart 36.04
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.74
348 TestStartStop/group/newest-cni/serial/Stop 1.21
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
350 TestStartStop/group/newest-cni/serial/SecondStart 25.73
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
354 TestStartStop/group/newest-cni/serial/Pause 2.47
355 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.02
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.06
357 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
358 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
359 TestStartStop/group/embed-certs/serial/Pause 2.59
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
362 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.55
363 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
364 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
365 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
366 TestStartStop/group/no-preload/serial/Pause 2.52
x
+
TestDownloadOnly/v1.16.0/json-events (6.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-937565 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-937565 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.889035704s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.89s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-937565
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-937565: exit status 85 (61.026323ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-937565 | jenkins | v1.30.1 | 31 May 23 18:43 UTC |          |
	|         | -p download-only-937565        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 18:43:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:43:32.810520   14244 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:43:32.810621   14244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:43:32.810629   14244 out.go:309] Setting ErrFile to fd 2...
	I0531 18:43:32.810633   14244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:43:32.810730   14244 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	W0531 18:43:32.810834   14244 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16569-7270/.minikube/config/config.json: open /home/jenkins/minikube-integration/16569-7270/.minikube/config/config.json: no such file or directory
	I0531 18:43:32.811356   14244 out.go:303] Setting JSON to true
	I0531 18:43:32.812163   14244 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1562,"bootTime":1685557051,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:43:32.812220   14244 start.go:137] virtualization: kvm guest
	I0531 18:43:32.815531   14244 out.go:97] [download-only-937565] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:43:32.817857   14244 out.go:169] MINIKUBE_LOCATION=16569
	W0531 18:43:32.815666   14244 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball: no such file or directory
	I0531 18:43:32.815730   14244 notify.go:220] Checking for updates...
	I0531 18:43:32.821935   14244 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:43:32.823811   14244 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 18:43:32.825689   14244 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 18:43:32.827426   14244 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0531 18:43:32.830511   14244 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 18:43:32.830707   14244 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:43:32.851541   14244 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:43:32.851611   14244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:43:33.179260   14244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-05-31 18:43:33.171268789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:43:33.179353   14244 docker.go:294] overlay module found
	I0531 18:43:33.181688   14244 out.go:97] Using the docker driver based on user configuration
	I0531 18:43:33.181707   14244 start.go:297] selected driver: docker
	I0531 18:43:33.181712   14244 start.go:875] validating driver "docker" against <nil>
	I0531 18:43:33.181784   14244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:43:33.233866   14244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-05-31 18:43:33.224609747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:43:33.234022   14244 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0531 18:43:33.234482   14244 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0531 18:43:33.234628   14244 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 18:43:33.237528   14244 out.go:169] Using Docker driver with root privileges
	I0531 18:43:33.239547   14244 cni.go:84] Creating CNI manager for ""
	I0531 18:43:33.239559   14244 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:43:33.239573   14244 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 18:43:33.239585   14244 start_flags.go:319] config:
	{Name:download-only-937565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-937565 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:43:33.241797   14244 out.go:97] Starting control plane node download-only-937565 in cluster download-only-937565
	I0531 18:43:33.241819   14244 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 18:43:33.243742   14244 out.go:97] Pulling base image ...
	I0531 18:43:33.243759   14244 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0531 18:43:33.243885   14244 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 18:43:33.258695   14244 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0531 18:43:33.258929   14244 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0531 18:43:33.259121   14244 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0531 18:43:33.274000   14244 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0531 18:43:33.274021   14244 cache.go:57] Caching tarball of preloaded images
	I0531 18:43:33.274142   14244 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0531 18:43:33.276661   14244 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0531 18:43:33.276682   14244 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0531 18:43:33.310315   14244 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0531 18:43:36.569140   14244 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-937565"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
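One detail worth noting from the log above: the preload URL carries a ?checksum=md5:... query parameter, i.e. the tarball is validated against an MD5 digest after download. A minimal, self-contained sketch of that kind of post-download check (a hypothetical verifyMD5 helper for illustration, not minikube's actual download.go internals):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 streams a downloaded file through MD5 and compares the hex digest
// against the expected value taken from the ?checksum=md5:... parameter.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Digest copied verbatim from the download URL in the log above.
	err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4",
		"432b600409d778ea7a21214e83948570")
	fmt.Println(err)
}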

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/json-events (6.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-937565 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-937565 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.520504772s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (6.52s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-937565
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-937565: exit status 85 (58.918373ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-937565 | jenkins | v1.30.1 | 31 May 23 18:43 UTC |          |
	|         | -p download-only-937565        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-937565 | jenkins | v1.30.1 | 31 May 23 18:43 UTC |          |
	|         | -p download-only-937565        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/31 18:43:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:43:39.763921   14391 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:43:39.764024   14391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:43:39.764032   14391 out.go:309] Setting ErrFile to fd 2...
	I0531 18:43:39.764036   14391 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:43:39.764134   14391 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	W0531 18:43:39.764250   14391 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16569-7270/.minikube/config/config.json: open /home/jenkins/minikube-integration/16569-7270/.minikube/config/config.json: no such file or directory
	I0531 18:43:39.764644   14391 out.go:303] Setting JSON to true
	I0531 18:43:39.765400   14391 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1569,"bootTime":1685557051,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:43:39.765462   14391 start.go:137] virtualization: kvm guest
	I0531 18:43:39.768358   14391 out.go:97] [download-only-937565] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:43:39.770143   14391 out.go:169] MINIKUBE_LOCATION=16569
	I0531 18:43:39.768535   14391 notify.go:220] Checking for updates...
	I0531 18:43:39.774188   14391 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:43:39.776372   14391 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 18:43:39.779592   14391 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 18:43:39.781654   14391 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0531 18:43:39.785327   14391 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 18:43:39.785687   14391 config.go:182] Loaded profile config "download-only-937565": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0531 18:43:39.785732   14391 start.go:783] api.Load failed for download-only-937565: filestore "download-only-937565": Docker machine "download-only-937565" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 18:43:39.785815   14391 driver.go:375] Setting default libvirt URI to qemu:///system
	W0531 18:43:39.785844   14391 start.go:783] api.Load failed for download-only-937565: filestore "download-only-937565": Docker machine "download-only-937565" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0531 18:43:39.805418   14391 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:43:39.805520   14391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:43:39.854169   14391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-05-31 18:43:39.846399825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:43:39.854272   14391 docker.go:294] overlay module found
	I0531 18:43:39.856861   14391 out.go:97] Using the docker driver based on existing profile
	I0531 18:43:39.856891   14391 start.go:297] selected driver: docker
	I0531 18:43:39.856896   14391 start.go:875] validating driver "docker" against &{Name:download-only-937565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-937565 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:43:39.857054   14391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:43:39.907459   14391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-05-31 18:43:39.89975583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:43:39.908015   14391 cni.go:84] Creating CNI manager for ""
	I0531 18:43:39.908033   14391 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0531 18:43:39.908040   14391 start_flags.go:319] config:
	{Name:download-only-937565 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-937565 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:43:39.910357   14391 out.go:97] Starting control plane node download-only-937565 in cluster download-only-937565
	I0531 18:43:39.910388   14391 cache.go:122] Beginning downloading kic base image for docker with crio
	I0531 18:43:39.912130   14391 out.go:97] Pulling base image ...
	I0531 18:43:39.912150   14391 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:43:39.912269   14391 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local docker daemon
	I0531 18:43:39.927048   14391 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4
	I0531 18:43:39.927077   14391 cache.go:57] Caching tarball of preloaded images
	I0531 18:43:39.927266   14391 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0531 18:43:39.928585   14391 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 to local cache
	I0531 18:43:39.928692   14391 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory
	I0531 18:43:39.928708   14391 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 in local cache directory, skipping pull
	I0531 18:43:39.928716   14391 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 exists in cache, skipping pull
	I0531 18:43:39.928730   14391 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 as a tarball
	I0531 18:43:39.929793   14391 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0531 18:43:39.929814   14391 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 ...
	I0531 18:43:39.962537   14391 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9aab8d7df6abf9830e86bd030b106830 -> /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4
	I0531 18:43:44.548118   14391 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 ...
	I0531 18:43:44.548205   14391 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16569-7270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-937565"

-- /stdout --
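The preload download above carries its expected digest in the URL query (`?checksum=md5:9aab8d7df6abf9830e86bd030b106830`), and the `saving checksum` / `verifying checksum` steps re-check the tarball once it is on disk. A minimal sketch of that kind of verification in Go — a hypothetical helper for illustration, not minikube's actual code:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-computes the md5 digest of a downloaded file and compares it
// to the hex digest advertised in the download URL's ?checksum=md5:... query.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Filename and digest taken from the log lines above.
	err := verifyMD5(
		"preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4",
		"9aab8d7df6abf9830e86bd030b106830",
	)
	fmt.Println("verify:", err)
}
```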
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.18s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-937565
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnlyKic (1.16s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-546935 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-546935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-546935
--- PASS: TestDownloadOnlyKic (1.16s)

TestBinaryMirror (0.7s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-791091 --alsologtostderr --binary-mirror http://127.0.0.1:41685 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-791091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-791091
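`--binary-mirror http://127.0.0.1:41685` redirects the kubectl/kubelet/kubeadm downloads to a local endpoint; any static file server laid out like the upstream release tree will satisfy it. A minimal sketch in Go, assuming the binaries have been pre-staged under a hypothetical `./mirror` directory:

```go
package main

import (
	"log"
	"net/http"
)

// A minimal stand-in for the mirror that --binary-mirror points at:
// a static HTTP server whose directory tree mirrors the upstream
// release layout. The port matches the test invocation above.
func main() {
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:41685", nil))
}
```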
--- PASS: TestBinaryMirror (0.70s)

TestOffline (67.46s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-272093 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-272093 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m4.883146183s)
helpers_test.go:175: Cleaning up "offline-crio-272093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-272093
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-272093: (2.575642456s)
--- PASS: TestOffline (67.46s)

TestAddons/Setup (122.02s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-133126 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-133126 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m2.024004536s)
--- PASS: TestAddons/Setup (122.02s)

TestAddons/parallel/Registry (13.28s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 14.987832ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fffjl" [dba6dd7a-325e-4f40-ae58-f472a48ce54b] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015895569s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dvxpf" [e83e8386-6995-4287-9ba6-92204936ebea] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008072256s
addons_test.go:316: (dbg) Run:  kubectl --context addons-133126 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-133126 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-133126 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.733177279s)
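The `wget --spider` step above is simply an in-cluster HTTP reachability probe against the registry Service's DNS name. The same check in Go, as a sketch — it only succeeds from a pod inside the cluster, where `*.svc.cluster.local` resolves:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Probe the registry Service the way the busybox wget above does:
// an HTTP GET against the in-cluster DNS name, succeeding on any response.
func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry answered with status", resp.Status)
}
```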
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 ip
2023/05/31 18:46:03 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.28s)

TestAddons/parallel/InspektorGadget (10.62s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9bf5m" [a90bf4ce-bf41-4101-8e8e-d2a4f5e5f338] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006629704s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-133126
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-133126: (5.609013775s)
--- PASS: TestAddons/parallel/InspektorGadget (10.62s)

TestAddons/parallel/MetricsServer (5.45s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.050182ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-jhps2" [9bdea332-ca3f-4804-ad93-d18dd0d6ad06] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0072796s
addons_test.go:391: (dbg) Run:  kubectl --context addons-133126 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.45s)

TestAddons/parallel/HelmTiller (9.19s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 13.666593ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-56gj2" [d921de98-18b8-4777-9691-8873f4f7dd02] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015837809s
addons_test.go:449: (dbg) Run:  kubectl --context addons-133126 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-133126 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.800406483s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.19s)

TestAddons/parallel/CSI (52.28s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 11.285283ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-133126 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc -o jsonpath={.status.phase} -n default
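The repeated `kubectl get pvc hpvc -o jsonpath={.status.phase}` calls above are the test helper polling until the claim leaves `Pending` and reports `Bound`. An equivalent poll with client-go, as a sketch (assumes a reachable kubeconfig at the default location; the names `hpvc` and `default` are taken from the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Poll a PVC until its .status.phase reaches Bound, mirroring the
// repeated jsonpath lookups in the log above.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		pvc, err := clientset.CoreV1().PersistentVolumeClaims("default").
			Get(context.TODO(), "hpvc", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("phase:", pvc.Status.Phase)
		if pvc.Status.Phase == corev1.ClaimBound {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```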
addons_test.go:550: (dbg) Run:  kubectl --context addons-133126 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8336c271-e711-4cec-a969-2a710e961254] Pending
helpers_test.go:344: "task-pv-pod" [8336c271-e711-4cec-a969-2a710e961254] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8336c271-e711-4cec-a969-2a710e961254] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.009666016s
addons_test.go:560: (dbg) Run:  kubectl --context addons-133126 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-133126 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-133126 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-133126 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-133126 delete pod task-pv-pod: (1.071478474s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-133126 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-133126 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133126 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-133126 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [dffcbb6d-8b20-4e0d-9aaa-69f214b821e2] Pending
helpers_test.go:344: "task-pv-pod-restore" [dffcbb6d-8b20-4e0d-9aaa-69f214b821e2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [dffcbb6d-8b20-4e0d-9aaa-69f214b821e2] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.009167243s
addons_test.go:602: (dbg) Run:  kubectl --context addons-133126 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-133126 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-133126 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-133126 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.324812188s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-133126 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.28s)

TestAddons/parallel/Headlamp (10.97s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-133126 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-mvlwx" [7fad00c7-0b90-4314-973e-00bd7212f020] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-mvlwx" [7fad00c7-0b90-4314-973e-00bd7212f020] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.006620626s
--- PASS: TestAddons/parallel/Headlamp (10.97s)

TestAddons/parallel/CloudSpanner (5.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6964794569-zjbr7" [21e52b80-cbd6-4ff2-87d4-ba53399401ca] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009357836s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-133126
--- PASS: TestAddons/parallel/CloudSpanner (5.29s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-133126 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-133126 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-133126
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-133126: (11.88324511s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-133126
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-133126
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-133126
--- PASS: TestAddons/StoppedEnableDisable (12.06s)

TestCertOptions (33.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-104708 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-104708 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.72528902s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-104708 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-104708 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-104708 -- "sudo cat /etc/kubernetes/admin.conf"
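The `openssl x509 -text -noout` step above is what confirms that the extra `--apiserver-ips` / `--apiserver-names` values ended up in the serving certificate's SANs. The same inspection in Go, as a sketch (assumes the cert has been copied off the node to a local `apiserver.crt`, a hypothetical path):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Decode the PEM-encoded apiserver certificate and print its SANs,
// the fields the openssl invocation above checks by eye.
func main() {
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)     // expect localhost, www.google.com, ...
	fmt.Println("IP SANs:", cert.IPAddresses)   // expect 127.0.0.1, 192.168.15.15, ...
}
```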
helpers_test.go:175: Cleaning up "cert-options-104708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-104708
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-104708: (1.860933354s)
--- PASS: TestCertOptions (33.13s)

TestCertExpiration (233.4s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-214217 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-214217 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.006387582s)
E0531 19:15:42.682114   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-214217 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-214217 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.143302124s)
helpers_test.go:175: Cleaning up "cert-expiration-214217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-214217
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-214217: (2.249093061s)
--- PASS: TestCertExpiration (233.40s)

TestForceSystemdFlag (24.67s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-017904 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-017904 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.030874934s)
docker_test.go:126: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-017904 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-017904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-017904
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-017904: (2.370096385s)
--- PASS: TestForceSystemdFlag (24.67s)

TestForceSystemdEnv (38.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-368817 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-368817 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.620200772s)
helpers_test.go:175: Cleaning up "force-systemd-env-368817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-368817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-368817: (3.372724468s)
--- PASS: TestForceSystemdEnv (38.99s)

TestKVMDriverInstallOrUpdate (2.78s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.78s)

TestErrorSpam/setup (23.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-362671 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-362671 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-362671 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-362671 --driver=docker  --container-runtime=crio: (23.319320695s)
--- PASS: TestErrorSpam/setup (23.32s)

TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 pause
--- PASS: TestErrorSpam/pause (1.41s)

TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (1.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 stop: (1.187415019s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-362671 --log_dir /tmp/nospam-362671 stop
--- PASS: TestErrorSpam/stop (1.35s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16569-7270/.minikube/files/etc/test/nested/copy/14232/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.14s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744804 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0531 18:50:50.615730   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:50:50.621674   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:50:50.631926   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:50:50.652229   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:50:50.692508   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:50:50.772797   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:50:50.933177   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:50:51.253719   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
functional_test.go:2229: (dbg) Done: out/minikube-linux-amd64 start -p functional-744804 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.139069029s)
--- PASS: TestFunctional/serial/StartWithProxy (69.14s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.78s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744804 --alsologtostderr -v=8
E0531 18:50:51.893925   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:50:53.174971   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:50:55.735721   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:51:00.856421   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:51:11.097316   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 18:51:31.578089   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-744804 --alsologtostderr -v=8: (44.782666137s)
functional_test.go:658: soft start took 44.783318928s for "functional-744804" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.78s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-744804 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.68s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-744804 /tmp/TestFunctionalserialCacheCmdcacheadd_local3948965546/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 cache add minikube-local-cache-test:functional-744804
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 cache delete minikube-local-cache-test:functional-744804
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-744804
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (242.676461ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 kubectl -- --context functional-744804 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-744804 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (32.79s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744804 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0531 18:52:12.539269   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-744804 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.792695743s)
functional_test.go:756: restart took 32.79279654s for "functional-744804" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.79s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-744804 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
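The health check above lists the pods labelled `tier=control-plane` in `kube-system` and reads each one's phase and Ready condition out of the JSON. A client-go sketch of the same check (assumes a reachable kubeconfig at the default location):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// List the control-plane pods by label and report phase plus Ready status,
// mirroring the "phase: Running" / "status: Ready" lines above.
func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("%s ready: %s\n", p.Name, c.Status)
			}
		}
	}
}
```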
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-744804 logs: (1.357458463s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.39s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 logs --file /tmp/TestFunctionalserialLogsFileCmd912997755/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-744804 logs --file /tmp/TestFunctionalserialLogsFileCmd912997755/001/logs.txt: (1.389272712s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 config get cpus: exit status 14 (85.173861ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 config get cpus: exit status 14 (42.985339ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
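
For reference, the config round trip this test drives can be reproduced by hand. The commands below are the ones logged above; the trailing comments are illustrative, not test output. minikube config get exits with status 14 whenever the key is unset, which is what the test asserts both before and after the set/unset cycle.

    out/minikube-linux-amd64 -p functional-744804 config set cpus 2
    out/minikube-linux-amd64 -p functional-744804 config get cpus     # prints 2, exit 0
    out/minikube-linux-amd64 -p functional-744804 config unset cpus
    out/minikube-linux-amd64 -p functional-744804 config get cpus     # exit 14: key not found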

TestFunctional/parallel/DashboardCmd (12.65s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-744804 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-744804 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 45241: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.65s)

TestFunctional/parallel/DryRun (0.46s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-744804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (214.925405ms)

-- stdout --
	* [functional-744804] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr **
	I0531 18:52:19.216855   44260 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:52:19.216974   44260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:52:19.216989   44260 out.go:309] Setting ErrFile to fd 2...
	I0531 18:52:19.216996   44260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:52:19.217131   44260 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 18:52:19.217710   44260 out.go:303] Setting JSON to false
	I0531 18:52:19.218691   44260 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2088,"bootTime":1685557051,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:52:19.218749   44260 start.go:137] virtualization: kvm guest
	I0531 18:52:19.221859   44260 out.go:177] * [functional-744804] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:52:19.223935   44260 notify.go:220] Checking for updates...
	I0531 18:52:19.223956   44260 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 18:52:19.226060   44260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:52:19.227895   44260 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 18:52:19.237967   44260 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 18:52:19.240032   44260 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:52:19.242637   44260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:52:19.244923   44260 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:52:19.245548   44260 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:52:19.286163   44260 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:52:19.286238   44260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:52:19.345810   44260 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:47 SystemTime:2023-05-31 18:52:19.33458227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:52:19.345905   44260 docker.go:294] overlay module found
	I0531 18:52:19.348661   44260 out.go:177] * Using the docker driver based on existing profile
	I0531 18:52:19.351122   44260 start.go:297] selected driver: docker
	I0531 18:52:19.351142   44260 start.go:875] validating driver "docker" against &{Name:functional-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-744804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:52:19.351257   44260 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:52:19.354284   44260 out.go:177] 
	W0531 18:52:19.356384   44260 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0531 18:52:19.358874   44260 out.go:177]

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744804 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
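
The non-zero exit is the expected outcome here: --dry-run validates the flags against the existing profile without creating or mutating anything, and a 250MB request fails minikube's 1800MB minimum with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal reproduction using the logged flags:

    out/minikube-linux-amd64 start -p functional-744804 --dry-run --memory 250MB --driver=docker --container-runtime=crio
    echo $?   # expected: 23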

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-744804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-744804 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (181.280679ms)

-- stdout --
	* [functional-744804] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr **
	I0531 18:52:19.031111   44158 out.go:296] Setting OutFile to fd 1 ...
	I0531 18:52:19.031287   44158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:52:19.031300   44158 out.go:309] Setting ErrFile to fd 2...
	I0531 18:52:19.031307   44158 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 18:52:19.031552   44158 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 18:52:19.032289   44158 out.go:303] Setting JSON to false
	I0531 18:52:19.033642   44158 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2088,"bootTime":1685557051,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:52:19.033727   44158 start.go:137] virtualization: kvm guest
	I0531 18:52:19.036753   44158 out.go:177] * [functional-744804] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	I0531 18:52:19.038882   44158 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 18:52:19.038930   44158 notify.go:220] Checking for updates...
	I0531 18:52:19.042302   44158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:52:19.044162   44158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 18:52:19.045927   44158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 18:52:19.047721   44158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:52:19.049652   44158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:52:19.052052   44158 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 18:52:19.052739   44158 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 18:52:19.077603   44158 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 18:52:19.077688   44158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 18:52:19.131190   44158 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:47 SystemTime:2023-05-31 18:52:19.12275723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 18:52:19.131279   44158 docker.go:294] overlay module found
	I0531 18:52:19.135157   44158 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0531 18:52:19.137019   44158 start.go:297] selected driver: docker
	I0531 18:52:19.137032   44158 start.go:875] validating driver "docker" against &{Name:functional-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685034446-16582@sha256:aa728b22374c829d1e5b0a5d64d51d3e0ae0f2b191381d957516fdff68f357c8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-744804 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0531 18:52:19.137120   44158 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:52:19.139842   44158 out.go:177] 
	W0531 18:52:19.141649   44158 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0531 18:52:19.143580   44158 out.go:177]

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.4s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.40s)
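
The -f flag accepts a Go template rendered against minikube's status struct; the logged format string labels the .Kubelet field "kublet", but the label text is arbitrary and only the field names matter. A sketch of the three forms exercised:

    out/minikube-linux-amd64 -p functional-744804 status
    out/minikube-linux-amd64 -p functional-744804 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    out/minikube-linux-amd64 -p functional-744804 status -o json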

TestFunctional/parallel/ServiceCmdConnect (6.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-744804 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-744804 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-26p57" [33e17e0d-4692-4796-916a-48db354b983e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-26p57" [33e17e0d-4692-4796-916a-48db354b983e] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.007143438s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.49.2:31828
functional_test.go:1673: http://192.168.49.2:31828: success! body:

Hostname: hello-node-connect-6fb669fc84-26p57

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31828
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.70s)
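
This is the standard NodePort round trip; a hand-run sketch assembled from the logged commands (the final curl is illustrative, the test fetches the URL internally):

    kubectl --context functional-744804 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-744804 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-744804 service hello-node-connect --url)
    curl "$URL"   # returns the echoserver request dump shown above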

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (28.56s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a6b19b8e-74cc-42cd-8dba-f45cf9043b55] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008972727s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-744804 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-744804 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-744804 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-744804 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cc0c25b4-3c0a-4315-a73f-bdbc26b18d2f] Pending
helpers_test.go:344: "sp-pod" [cc0c25b4-3c0a-4315-a73f-bdbc26b18d2f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cc0c25b4-3c0a-4315-a73f-bdbc26b18d2f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.007350396s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-744804 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-744804 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-744804 delete -f testdata/storage-provisioner/pod.yaml: (1.297207611s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-744804 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3e528068-d3da-42a4-ab99-8a5f8edef72c] Pending
helpers_test.go:344: "sp-pod" [3e528068-d3da-42a4-ab99-8a5f8edef72c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3e528068-d3da-42a4-ab99-8a5f8edef72c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.006850498s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-744804 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.56s)
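
The test applies a claim, writes a file through one pod, deletes that pod, and then verifies the file is still visible from a second pod, proving the volume outlives its consumers. The repo's testdata manifests are not reproduced in this report; a minimal stand-in for the claim (same name, default StorageClass assumed, size illustrative) would be:

    kubectl --context functional-744804 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF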

TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "cat /etc/hostname"
2023/05/31 18:52:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (1.21s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh -n functional-744804 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 cp functional-744804:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3557607702/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh -n functional-744804 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.21s)
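
Both copy directions are covered, each verified with an in-node cat. The same pair can be run by hand (the local destination path is illustrative):

    out/minikube-linux-amd64 -p functional-744804 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
    out/minikube-linux-amd64 -p functional-744804 cp functional-744804:/home/docker/cp-test.txt ./cp-test.txt  # node -> host
    out/minikube-linux-amd64 -p functional-744804 ssh -n functional-744804 "sudo cat /home/docker/cp-test.txt"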

TestFunctional/parallel/MySQL (21.92s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-744804 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-2zcg6" [489faf82-e4bd-4602-b4af-09a1d3276f8e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-2zcg6" [489faf82-e4bd-4602-b4af-09a1d3276f8e] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.009361472s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-744804 exec mysql-7db894d786-2zcg6 -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-744804 exec mysql-7db894d786-2zcg6 -- mysql -ppassword -e "show databases;": exit status 1 (123.958869ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-744804 exec mysql-7db894d786-2zcg6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.92s)
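
The first exec failing with ERROR 2002 only means mysqld inside the pod was still starting when the probe ran; the test simply re-runs the command. A script following the same pattern could poll like this (the retry loop is illustrative, not the test's mechanism):

    until kubectl --context functional-744804 exec mysql-7db894d786-2zcg6 -- \
          mysql -ppassword -e "show databases;" 2>/dev/null; do
      sleep 2   # wait for mysqld to accept socket connections
    done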

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/14232/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo cat /etc/test/nested/copy/14232/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (1.64s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/14232.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo cat /etc/ssl/certs/14232.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/14232.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo cat /usr/share/ca-certificates/14232.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/142322.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo cat /etc/ssl/certs/142322.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/142322.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo cat /usr/share/ca-certificates/142322.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)
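
The 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash links, which is presumably how the test derives a third lookup path for each certificate. The hash for any PEM certificate can be computed with:

    openssl x509 -noout -subject_hash -in 14232.pem   # prints e.g. 51391683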

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-744804 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo systemctl is-active docker"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 ssh "sudo systemctl is-active docker": exit status 1 (283.701688ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo systemctl is-active containerd"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 ssh "sudo systemctl is-active containerd": exit status 1 (297.431205ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
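
systemctl is-active reports the unit state on stdout and in its exit code (0 for active, non-zero, 3 here, otherwise), and minikube ssh propagates that code, which is exactly what the two expected failures above assert on a crio cluster. By hand:

    out/minikube-linux-amd64 -p functional-744804 ssh "sudo systemctl is-active docker" \
        || echo "docker is not the active runtime"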

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-744804 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-744804 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-lql8g" [0ba574c0-d9da-4e47-85da-315c67765626] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-lql8g" [0ba574c0-d9da-4e47-85da-315c67765626] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.021206705s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

TestFunctional/parallel/MountCmd/any-port (8.27s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdany-port186596010/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1685559138138122583" to /tmp/TestFunctionalparallelMountCmdany-port186596010/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1685559138138122583" to /tmp/TestFunctionalparallelMountCmdany-port186596010/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1685559138138122583" to /tmp/TestFunctionalparallelMountCmdany-port186596010/001/test-1685559138138122583
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (298.933056ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 31 18:52 created-by-test
-rw-r--r-- 1 docker docker 24 May 31 18:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 31 18:52 test-1685559138138122583
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh cat /mount-9p/test-1685559138138122583
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-744804 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e8affe05-3225-4e2d-93b3-33c54fc0250a] Pending
helpers_test.go:344: "busybox-mount" [e8affe05-3225-4e2d-93b3-33c54fc0250a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e8affe05-3225-4e2d-93b3-33c54fc0250a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e8affe05-3225-4e2d-93b3-33c54fc0250a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007664138s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-744804 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdany-port186596010/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.27s)
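
The 9p mount sequence is reusable outside the harness; a sketch with an illustrative host directory (as above, the first findmnt can race the mount coming up, so it may need a retry):

    out/minikube-linux-amd64 mount -p functional-744804 /tmp/hostdir:/mount-9p &   # keep running in background
    out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-744804 ssh "ls -la /mount-9p"
    out/minikube-linux-amd64 -p functional-744804 ssh "sudo umount -f /mount-9p"   # cleanup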

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "326.033732ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "49.454668ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "271.334339ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "45.450768ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744804 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-744804
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744804 image ls --format short --alsologtostderr:
I0531 18:52:42.508968   50258 out.go:296] Setting OutFile to fd 1 ...
I0531 18:52:42.509083   50258 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:42.509091   50258 out.go:309] Setting ErrFile to fd 2...
I0531 18:52:42.509095   50258 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:42.509198   50258 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
I0531 18:52:42.509711   50258 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:42.509809   50258 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:42.510180   50258 cli_runner.go:164] Run: docker container inspect functional-744804 --format={{.State.Status}}
I0531 18:52:42.526430   50258 ssh_runner.go:195] Run: systemctl --version
I0531 18:52:42.526494   50258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744804
I0531 18:52:42.542416   50258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/functional-744804/id_rsa Username:docker}
I0531 18:52:42.624407   50258 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744804 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-744804  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| docker.io/library/nginx                 | alpine             | fe7edaf8a8dcf | 43.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-controller-manager | v1.27.2            | ac2b7465ebba9 | 114MB  |
| registry.k8s.io/kube-proxy              | v1.27.2            | b8aa50768fd67 | 72.7MB |
| registry.k8s.io/kube-scheduler          | v1.27.2            | 89e70da428d29 | 59.8MB |
| docker.io/library/nginx                 | latest             | f9c14fe76d502 | 147MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-apiserver          | v1.27.2            | c5b13e4f7806d | 122MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744804 image ls --format table --alsologtostderr:
I0531 18:52:44.518363   50576 out.go:296] Setting OutFile to fd 1 ...
I0531 18:52:44.518619   50576 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:44.518630   50576 out.go:309] Setting ErrFile to fd 2...
I0531 18:52:44.518637   50576 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:44.518872   50576 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
I0531 18:52:44.520058   50576 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:44.520218   50576 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:44.520656   50576 cli_runner.go:164] Run: docker container inspect functional-744804 --format={{.State.Status}}
I0531 18:52:44.536319   50576 ssh_runner.go:195] Run: systemctl --version
I0531 18:52:44.536369   50576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744804
I0531 18:52:44.559240   50576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/functional-744804/id_rsa Username:docker}
I0531 18:52:44.746620   50576 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.48s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744804 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"fe7edaf8a8dcf9af72f49cf0a0219e3ace17667bafc537f0d4a0ab1bd7f10467","repoDigests":["docker.io/library/nginx@sha256:0b0af14a00ea0e4fd9b09e77d2b89b71b5c5a97f9aa073553f355415bc34ae33","docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90"],"repoTags":["docker.io/library/nginx:alpine"],"size":"43234868"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:279461bc1c0b4753dc83677a927b9f7827012b3adbcaa5df9dfd4af8b0987bc6","registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"113906988"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-744804"],"size":"34114467"},{"id":"c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","repoDigests":["registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9","registry.k8s.io/kube-apiserver@sha256:95388fe585f1d6f65d414678042a281f80593e78cabaeeb8520a0873ebbb54f2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"122053574"},{"id":"b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","repoDigests":["registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f","registry.k8s.io/kube-proxy@sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"72709527"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177","registry.k8s.io/kube-scheduler@sha256:f8be7505892d1671a15afa3ac6c3b31e50da87dd59a4745e30a5b3f9f584ee6e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"59802924"},{"id":"f9c14fe76d502861ba0939bc3189e642c02e257f06f4c0214b1f8ca329326cda","repoDigests":["docker.io/library/nginx@sha256:6b06964cdbbc517102ce5e0cef95152f3c6a7ef703e4057cb574539de91f72e6","docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305"],"repoTags":["docker.io/library/nginx:latest"],"size":"146967160"}]
functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744804 image ls --format json --alsologtostderr:
I0531 18:52:44.111532   50530 out.go:296] Setting OutFile to fd 1 ...
I0531 18:52:44.111633   50530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:44.111642   50530 out.go:309] Setting ErrFile to fd 2...
I0531 18:52:44.111646   50530 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:44.111761   50530 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
I0531 18:52:44.112283   50530 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:44.112405   50530 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:44.112783   50530 cli_runner.go:164] Run: docker container inspect functional-744804 --format={{.State.Status}}
I0531 18:52:44.128546   50530 ssh_runner.go:195] Run: systemctl --version
I0531 18:52:44.128584   50530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744804
I0531 18:52:44.154003   50530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/functional-744804/id_rsa Username:docker}
I0531 18:52:44.346630   50530 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)
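
For reference, the JSON listing above can be filtered with standard tools rather than read by eye. A minimal sketch, assuming the functional-744804 profile is still running and that jq (not part of the test suite) is available on the host:

	# list only the tagged images reported by the cri-o runtime
	out/minikube-linux-amd64 -p functional-744804 image ls --format json \
		| jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'
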

TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-744804 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-744804
size: "34114467"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:279461bc1c0b4753dc83677a927b9f7827012b3adbcaa5df9dfd4af8b0987bc6
- registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "113906988"
- id: 89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177
- registry.k8s.io/kube-scheduler@sha256:f8be7505892d1671a15afa3ac6c3b31e50da87dd59a4745e30a5b3f9f584ee6e
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "59802924"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: fe7edaf8a8dcf9af72f49cf0a0219e3ace17667bafc537f0d4a0ab1bd7f10467
repoDigests:
- docker.io/library/nginx@sha256:0b0af14a00ea0e4fd9b09e77d2b89b71b5c5a97f9aa073553f355415bc34ae33
- docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90
repoTags:
- docker.io/library/nginx:alpine
size: "43234868"
- id: f9c14fe76d502861ba0939bc3189e642c02e257f06f4c0214b1f8ca329326cda
repoDigests:
- docker.io/library/nginx@sha256:6b06964cdbbc517102ce5e0cef95152f3c6a7ef703e4057cb574539de91f72e6
- docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
repoTags:
- docker.io/library/nginx:latest
size: "146967160"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9
- registry.k8s.io/kube-apiserver@sha256:95388fe585f1d6f65d414678042a281f80593e78cabaeeb8520a0873ebbb54f2
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "122053574"
- id: b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f
- registry.k8s.io/kube-proxy@sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "72709527"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-744804 image ls --format yaml --alsologtostderr:
I0531 18:52:42.698530   50302 out.go:296] Setting OutFile to fd 1 ...
I0531 18:52:42.698663   50302 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:42.698670   50302 out.go:309] Setting ErrFile to fd 2...
I0531 18:52:42.698674   50302 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 18:52:42.698799   50302 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
I0531 18:52:42.699320   50302 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:42.699409   50302 config.go:182] Loaded profile config "functional-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0531 18:52:42.699778   50302 cli_runner.go:164] Run: docker container inspect functional-744804 --format={{.State.Status}}
I0531 18:52:42.715824   50302 ssh_runner.go:195] Run: systemctl --version
I0531 18:52:42.715882   50302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-744804
I0531 18:52:42.732425   50302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/functional-744804/id_rsa Username:docker}
I0531 18:52:42.812544   50302 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctional/parallel/ImageCommands/Setup (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-744804
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.98s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image load --daemon gcr.io/google-containers/addon-resizer:functional-744804 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p functional-744804 image load --daemon gcr.io/google-containers/addon-resizer:functional-744804 --alsologtostderr: (4.025837672s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image load --daemon gcr.io/google-containers/addon-resizer:functional-744804 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p functional-744804 image load --daemon gcr.io/google-containers/addon-resizer:functional-744804 --alsologtostderr: (3.666333609s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.90s)

TestFunctional/parallel/MountCmd/specific-port (1.71s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdspecific-port3197892450/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.306045ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdspecific-port3197892450/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 ssh "sudo umount -f /mount-9p": exit status 1 (249.967371ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-744804 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdspecific-port3197892450/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)
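
The specific-port subtest reduces to mounting a host directory into the guest over 9p on a fixed port and verifying it with findmnt. A minimal sketch of the same check by hand, assuming the functional-744804 profile, a free host port (46464 is just the value used above), and a hypothetical host directory /tmp/hostdir:

	# mount a host directory into the guest on a fixed port, verify the 9p mount, then unmount
	out/minikube-linux-amd64 mount -p functional-744804 /tmp/hostdir:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-744804 ssh "sudo umount -f /mount-9p"
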

TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2242801906/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2242801906/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2242801906/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T" /mount1: exit status 1 (333.181465ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-744804 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2242801906/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2242801906/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-744804 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2242801906/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

TestFunctional/parallel/ServiceCmd/List (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 service list -o json
functional_test.go:1492: Took "549.573851ms" to run "out/minikube-linux-amd64 -p functional-744804 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.49.2:32576
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-744804
functional_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image load --daemon gcr.io/google-containers/addon-resizer:functional-744804 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p functional-744804 image load --daemon gcr.io/google-containers/addon-resizer:functional-744804 --alsologtostderr: (4.957012106s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.04s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.49.2:32576
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
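
The List, JSONOutput, HTTPS, Format and URL subtests all drive the same service command. A minimal sketch of resolving and probing an endpoint by hand, assuming a hello-node service exists in the default namespace as in the runs above:

	# resolve the NodePort URL for the service, then request it
	URL=$(out/minikube-linux-amd64 -p functional-744804 service hello-node --url)
	curl -s "$URL"
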

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-744804 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-744804 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-744804 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 48304: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-744804 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-744804 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-744804 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a4f15ec7-2019-42b8-b681-69172ca31630] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a4f15ec7-2019-42b8-b681-69172ca31630] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.034793253s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.34s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image save gcr.io/google-containers/addon-resizer:functional-744804 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image rm gcr.io/google-containers/addon-resizer:functional-744804 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.03s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-744804
functional_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p functional-744804 image save --daemon gcr.io/google-containers/addon-resizer:functional-744804 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p functional-744804 image save --daemon gcr.io/google-containers/addon-resizer:functional-744804 --alsologtostderr: (3.172432274s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-744804
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.21s)
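
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full image round trip through a tarball. A minimal sketch of the same cycle, assuming the addon-resizer tag created in the Setup step and a hypothetical /tmp/addon-resizer.tar path:

	# save the image to a tarball, drop it from the cluster, reload it, then export it to the host docker daemon
	out/minikube-linux-amd64 -p functional-744804 image save gcr.io/google-containers/addon-resizer:functional-744804 /tmp/addon-resizer.tar
	out/minikube-linux-amd64 -p functional-744804 image rm gcr.io/google-containers/addon-resizer:functional-744804
	out/minikube-linux-amd64 -p functional-744804 image load /tmp/addon-resizer.tar
	out/minikube-linux-amd64 -p functional-744804 image save --daemon gcr.io/google-containers/addon-resizer:functional-744804
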

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-744804 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.211.207 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
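
The tunnel serial tests amount to: start the tunnel, wait for the LoadBalancer service to be assigned an ingress IP, then hit that IP directly. A minimal sketch of doing the same by hand, assuming a LoadBalancer service named nginx-svc as created in the Setup step:

	# run the tunnel in the background, read the assigned ingress IP, and probe it
	out/minikube-linux-amd64 -p functional-744804 tunnel --alsologtostderr &
	IP=$(kubectl --context functional-744804 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP/"
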

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-744804 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-744804
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-744804
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-744804
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (61.85s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-466444 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0531 18:53:34.459532   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-466444 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m1.849723604s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (61.85s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.07s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466444 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-466444 addons enable ingress --alsologtostderr -v=5: (11.071315216s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.07s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.34s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466444 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.34s)

TestJSONOutput/start/Command (67.26s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-027984 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0531 18:57:28.204894   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:38.445203   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 18:57:58.926368   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-027984 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m7.258505117s)
--- PASS: TestJSONOutput/start/Command (67.26s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-027984 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-027984 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-027984 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-027984 --output=json --user=testUser: (5.706241796s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-826427 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-826427 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.298975ms)

-- stdout --
	{"specversion":"1.0","id":"6d499ba2-d670-45f6-b1cd-bd630ce47c69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-826427] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"87b33347-83ef-44f2-a340-ee4e2daf1f7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16569"}}
	{"specversion":"1.0","id":"46917d43-ff26-411b-8085-6c3b53cf0645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ebb47356-d829-4c8c-bdc5-99f7136bcfd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig"}}
	{"specversion":"1.0","id":"be26b2d3-1434-47c4-b19d-6f0c8724d2a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube"}}
	{"specversion":"1.0","id":"bbc89114-9a53-4c56-b289-76aaa5cac506","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"276bcc06-598d-49c6-8f90-5f7ac79e8b61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"85a867b7-1e43-49ed-85cb-4230b2b93cef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-826427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-826427
--- PASS: TestErrorJSONOutput (0.19s)
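
Each line of the stdout above is a CloudEvents-style envelope (specversion, id, source, type, data), so specific event types can be extracted mechanically. A minimal sketch with jq (an assumption; it is not used by the suite itself), using the same failing start invocation:

	# keep only the error events from the JSON event stream
	out/minikube-linux-amd64 start -p json-output-error-826427 --memory=2200 --output=json --wait=true --driver=fail \
		| jq 'select(.type == "io.k8s.sigs.minikube.error") | .data'
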

TestKicCustomNetwork/create_custom_network (28.23s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-902591 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-902591 --network=: (26.210345738s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-902591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-902591
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-902591: (2.006252751s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.23s)

TestKicCustomNetwork/use_default_bridge_network (23.66s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-109712 --network=bridge
E0531 18:59:19.635146   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:19.640395   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:19.650664   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:19.671124   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:19.712249   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:19.792557   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:19.952969   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:20.273461   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:20.914352   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:22.194839   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:24.755487   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 18:59:29.876322   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-109712 --network=bridge: (21.732609982s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-109712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-109712
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-109712: (1.907479348s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.66s)

                                                
                                    
TestKicExistingNetwork (23.45s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-568589 --network=existing-network
E0531 18:59:40.117118   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-568589 --network=existing-network: (21.461747138s)
helpers_test.go:175: Cleaning up "existing-network-568589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-568589
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-568589: (1.856717894s)
--- PASS: TestKicExistingNetwork (23.45s)
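
The complementary scenario: attach a new cluster to a Docker network created beforehand. A sketch under the same assumptions (binary on PATH, illustrative profile name):

    docker network create existing-network
    minikube start -p existing-net-demo --network=existing-network
    minikube delete -p existing-net-demo
    docker network rm existing-network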

                                                
                                    
TestKicCustomSubnet (24.14s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-569941 --subnet=192.168.60.0/24
E0531 19:00:00.598129   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
E0531 19:00:01.807711   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-569941 --subnet=192.168.60.0/24: (22.138129129s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-569941 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-569941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-569941
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-569941: (1.987290833s)
--- PASS: TestKicCustomSubnet (24.14s)
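
To reproduce the subnet check by hand (illustrative profile name; the inspect template is the one the test itself runs, and the network is named after the profile):

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24
    minikube delete -p subnet-demo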

                                                
                                    
TestKicStaticIP (27.57s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-744027 --static-ip=192.168.200.200
E0531 19:00:41.559150   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-744027 --static-ip=192.168.200.200: (25.374860007s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-744027 ip
helpers_test.go:175: Cleaning up "static-ip-744027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-744027
E0531 19:00:50.615337   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-744027: (2.080664986s)
--- PASS: TestKicStaticIP (27.57s)
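
The static-IP variant, sketched with an illustrative profile name:

    minikube start -p static-ip-demo --static-ip=192.168.200.200
    minikube -p static-ip-demo ip    # expect 192.168.200.200
    minikube delete -p static-ip-demo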

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (50.16s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-360436 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-360436 --driver=docker  --container-runtime=crio: (21.30315704s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-363472 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-363472 --driver=docker  --container-runtime=crio: (24.021369552s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-360436
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-363472
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-363472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-363472
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-363472: (1.805047215s)
helpers_test.go:175: Cleaning up "first-360436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-360436
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-360436: (2.122153745s)
--- PASS: TestMinikubeProfile (50.16s)
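
A condensed version of the profile round-trip above (illustrative profile names; `minikube profile NAME` switches the active profile, and the JSON listing should show both):

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first
    minikube profile list -ojson
    minikube delete -p second && minikube delete -p first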

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.92s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-404412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-404412 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.919015458s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.92s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-404412 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)
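
The mount flow these subtests cover, sketched with a subset of the flags above (illustrative profile name; /minikube-host is where the mounted host directory appears inside the guest):

    minikube start -p mount-demo --memory=2048 --mount --mount-port 46464 --no-kubernetes
    minikube -p mount-demo ssh -- ls /minikube-host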

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.02s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-419339 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-419339 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.02204537s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.02s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.22s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-419339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.22s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-404412 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-404412 --alsologtostderr -v=5: (1.581950921s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.22s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-419339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.22s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-419339
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-419339: (1.187950719s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.83s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-419339
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-419339: (5.834411155s)
--- PASS: TestMountStart/serial/RestartStopped (6.83s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.22s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-419339 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.22s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (85.21s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-697136 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0531 19:02:17.963552   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 19:02:45.648526   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-697136 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m24.801166702s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.21s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-697136 -- rollout status deployment/busybox: (1.782705044s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-jsm9c -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-rvdrs -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-jsm9c -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-rvdrs -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-jsm9c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-697136 -- exec busybox-67b7f59bb-rvdrs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.49s)
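
The DNS check distilled (the busybox deployment comes from testdata/multinodes/multinode-pod-dns-test.yaml; <pod> is a placeholder for one of the names the second query returns):

    minikube kubectl -p multinode-697136 -- rollout status deployment/busybox
    minikube kubectl -p multinode-697136 -- get pods -o jsonpath='{.items[*].metadata.name}'
    minikube kubectl -p multinode-697136 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local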

                                                
                                    
TestMultiNode/serial/AddNode (44.92s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-697136 -v 3 --alsologtostderr
E0531 19:04:19.635055   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-697136 -v 3 --alsologtostderr: (44.379341257s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.92s)
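
Adding a worker by hand follows the same shape (assuming the binary on PATH; the extra verbosity flags above are optional):

    minikube node add -p multinode-697136
    minikube -p multinode-697136 status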

                                                
                                    
TestMultiNode/serial/ProfileList (0.25s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.2s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp testdata/cp-test.txt multinode-697136:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp multinode-697136:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4081500966/001/cp-test_multinode-697136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp multinode-697136:/home/docker/cp-test.txt multinode-697136-m02:/home/docker/cp-test_multinode-697136_multinode-697136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m02 "sudo cat /home/docker/cp-test_multinode-697136_multinode-697136-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp multinode-697136:/home/docker/cp-test.txt multinode-697136-m03:/home/docker/cp-test_multinode-697136_multinode-697136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m03 "sudo cat /home/docker/cp-test_multinode-697136_multinode-697136-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp testdata/cp-test.txt multinode-697136-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp multinode-697136-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4081500966/001/cp-test_multinode-697136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp multinode-697136-m02:/home/docker/cp-test.txt multinode-697136:/home/docker/cp-test_multinode-697136-m02_multinode-697136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136 "sudo cat /home/docker/cp-test_multinode-697136-m02_multinode-697136.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp multinode-697136-m02:/home/docker/cp-test.txt multinode-697136-m03:/home/docker/cp-test_multinode-697136-m02_multinode-697136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m03 "sudo cat /home/docker/cp-test_multinode-697136-m02_multinode-697136-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp testdata/cp-test.txt multinode-697136-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp multinode-697136-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4081500966/001/cp-test_multinode-697136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp multinode-697136-m03:/home/docker/cp-test.txt multinode-697136:/home/docker/cp-test_multinode-697136-m03_multinode-697136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136 "sudo cat /home/docker/cp-test_multinode-697136-m03_multinode-697136.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 cp multinode-697136-m03:/home/docker/cp-test.txt multinode-697136-m02:/home/docker/cp-test_multinode-697136-m03_multinode-697136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 ssh -n multinode-697136-m02 "sudo cat /home/docker/cp-test_multinode-697136-m03_multinode-697136-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.20s)
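
Each cp round-trip above reduces to a copy followed by an over-SSH read-back, for example node-to-node:

    minikube -p multinode-697136 cp testdata/cp-test.txt multinode-697136-m02:/home/docker/cp-test.txt
    minikube -p multinode-697136 ssh -n multinode-697136-m02 "sudo cat /home/docker/cp-test.txt"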

                                                
                                    
TestMultiNode/serial/StopNode (2.03s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-697136 node stop m03: (1.185860166s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-697136 status: exit status 7 (419.551395ms)

                                                
                                                
-- stdout --
	multinode-697136
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-697136-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-697136-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-697136 status --alsologtostderr: exit status 7 (420.875595ms)

                                                
                                                
-- stdout --
	multinode-697136
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-697136-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-697136-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:04:31.017911  110301 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:04:31.018040  110301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:04:31.018050  110301 out.go:309] Setting ErrFile to fd 2...
	I0531 19:04:31.018056  110301 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:04:31.018183  110301 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 19:04:31.018378  110301 out.go:303] Setting JSON to false
	I0531 19:04:31.018408  110301 mustload.go:65] Loading cluster: multinode-697136
	I0531 19:04:31.018501  110301 notify.go:220] Checking for updates...
	I0531 19:04:31.018848  110301 config.go:182] Loaded profile config "multinode-697136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:04:31.018870  110301 status.go:255] checking status of multinode-697136 ...
	I0531 19:04:31.019345  110301 cli_runner.go:164] Run: docker container inspect multinode-697136 --format={{.State.Status}}
	I0531 19:04:31.040044  110301 status.go:330] multinode-697136 host status = "Running" (err=<nil>)
	I0531 19:04:31.040078  110301 host.go:66] Checking if "multinode-697136" exists ...
	I0531 19:04:31.040404  110301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697136
	I0531 19:04:31.056723  110301 host.go:66] Checking if "multinode-697136" exists ...
	I0531 19:04:31.057093  110301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:04:31.057168  110301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136
	I0531 19:04:31.073311  110301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136/id_rsa Username:docker}
	I0531 19:04:31.153236  110301 ssh_runner.go:195] Run: systemctl --version
	I0531 19:04:31.156972  110301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:04:31.166751  110301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:04:31.211569  110301 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:56 SystemTime:2023-05-31 19:04:31.203570319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:04:31.212113  110301 kubeconfig.go:92] found "multinode-697136" server: "https://192.168.58.2:8443"
	I0531 19:04:31.212134  110301 api_server.go:166] Checking apiserver status ...
	I0531 19:04:31.212166  110301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:04:31.222010  110301 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1443/cgroup
	I0531 19:04:31.230226  110301 api_server.go:182] apiserver freezer: "7:freezer:/docker/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/crio/crio-e95e759b28daefcb2a32d79f11150e23e9f6f926263234c3defb955296a19e9a"
	I0531 19:04:31.230287  110301 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/319c771e8fa60107659044e74eb02b6f2cbd8b8fc7dd2f7bc5b2a5cc3158a7fe/crio/crio-e95e759b28daefcb2a32d79f11150e23e9f6f926263234c3defb955296a19e9a/freezer.state
	I0531 19:04:31.237873  110301 api_server.go:204] freezer state: "THAWED"
	I0531 19:04:31.237900  110301 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0531 19:04:31.242527  110301 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0531 19:04:31.242551  110301 status.go:421] multinode-697136 apiserver status = Running (err=<nil>)
	I0531 19:04:31.242576  110301 status.go:257] multinode-697136 status: &{Name:multinode-697136 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:04:31.242594  110301 status.go:255] checking status of multinode-697136-m02 ...
	I0531 19:04:31.242851  110301 cli_runner.go:164] Run: docker container inspect multinode-697136-m02 --format={{.State.Status}}
	I0531 19:04:31.259428  110301 status.go:330] multinode-697136-m02 host status = "Running" (err=<nil>)
	I0531 19:04:31.259454  110301 host.go:66] Checking if "multinode-697136-m02" exists ...
	I0531 19:04:31.259737  110301 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-697136-m02
	I0531 19:04:31.274679  110301 host.go:66] Checking if "multinode-697136-m02" exists ...
	I0531 19:04:31.274963  110301 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:04:31.275005  110301 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-697136-m02
	I0531 19:04:31.290581  110301 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16569-7270/.minikube/machines/multinode-697136-m02/id_rsa Username:docker}
	I0531 19:04:31.372973  110301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:04:31.383950  110301 status.go:257] multinode-697136-m02 status: &{Name:multinode-697136-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:04:31.383978  110301 status.go:255] checking status of multinode-697136-m03 ...
	I0531 19:04:31.384225  110301 cli_runner.go:164] Run: docker container inspect multinode-697136-m03 --format={{.State.Status}}
	I0531 19:04:31.399872  110301 status.go:330] multinode-697136-m03 host status = "Stopped" (err=<nil>)
	I0531 19:04:31.399890  110301 status.go:343] host is not running, skipping remaining checks
	I0531 19:04:31.399895  110301 status.go:257] multinode-697136-m03 status: &{Name:multinode-697136-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.03s)
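
Note the exit-code convention above: with one node stopped, `minikube status` exits non-zero (7 here), so the test treats that exit as the expected outcome. By hand:

    minikube -p multinode-697136 node stop m03
    minikube -p multinode-697136 status; echo "exit=$?"   # exit=7 while m03 is down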

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.76s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-697136 node start m03 --alsologtostderr: (10.139160647s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.76s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (113.85s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-697136
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-697136
E0531 19:04:47.321527   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-697136: (24.759850968s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-697136 --wait=true -v=8 --alsologtostderr
E0531 19:05:50.615614   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-697136 --wait=true -v=8 --alsologtostderr: (1m29.007078344s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-697136
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.85s)
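
The invariant being checked: the node set recorded before the stop must match the set after a full restart. A sketch:

    minikube node list -p multinode-697136          # record the node set
    minikube stop -p multinode-697136
    minikube start -p multinode-697136 --wait=true
    minikube node list -p multinode-697136          # should match the first listing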

                                                
                                    
TestMultiNode/serial/DeleteNode (4.58s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-697136 node delete m03: (4.037826269s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.58s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-697136 stop: (23.650645284s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-697136 status: exit status 7 (74.401338ms)

                                                
                                                
-- stdout --
	multinode-697136
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-697136-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-697136 status --alsologtostderr: exit status 7 (74.857361ms)

                                                
                                                
-- stdout --
	multinode-697136
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-697136-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:07:04.350145  120297 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:07:04.350264  120297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:07:04.350273  120297 out.go:309] Setting ErrFile to fd 2...
	I0531 19:07:04.350278  120297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:07:04.350411  120297 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 19:07:04.350575  120297 out.go:303] Setting JSON to false
	I0531 19:07:04.350597  120297 mustload.go:65] Loading cluster: multinode-697136
	I0531 19:07:04.350637  120297 notify.go:220] Checking for updates...
	I0531 19:07:04.351089  120297 config.go:182] Loaded profile config "multinode-697136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:07:04.351108  120297 status.go:255] checking status of multinode-697136 ...
	I0531 19:07:04.351545  120297 cli_runner.go:164] Run: docker container inspect multinode-697136 --format={{.State.Status}}
	I0531 19:07:04.370514  120297 status.go:330] multinode-697136 host status = "Stopped" (err=<nil>)
	I0531 19:07:04.370534  120297 status.go:343] host is not running, skipping remaining checks
	I0531 19:07:04.370542  120297 status.go:257] multinode-697136 status: &{Name:multinode-697136 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:07:04.370568  120297 status.go:255] checking status of multinode-697136-m02 ...
	I0531 19:07:04.370791  120297 cli_runner.go:164] Run: docker container inspect multinode-697136-m02 --format={{.State.Status}}
	I0531 19:07:04.386350  120297 status.go:330] multinode-697136-m02 host status = "Stopped" (err=<nil>)
	I0531 19:07:04.386376  120297 status.go:343] host is not running, skipping remaining checks
	I0531 19:07:04.386387  120297 status.go:257] multinode-697136-m02 status: &{Name:multinode-697136-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.80s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (71.91s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-697136 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0531 19:07:13.660785   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 19:07:17.963521   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-697136 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m11.354520611s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-697136 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (71.91s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.64s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-697136
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-697136-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-697136-m02 --driver=docker  --container-runtime=crio: exit status 14 (63.96522ms)

                                                
                                                
-- stdout --
	* [multinode-697136-m02] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-697136-m02' is duplicated with machine name 'multinode-697136-m02' in profile 'multinode-697136'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-697136-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-697136-m03 --driver=docker  --container-runtime=crio: (23.481199367s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-697136
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-697136: exit status 80 (250.811802ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-697136
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-697136-m03 already exists in multinode-697136-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-697136-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-697136-m03: (1.808767917s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.64s)
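
The duplicate-name failure mode condensed (the exit code is the one observed in the run, MK_USAGE):

    minikube start -p multinode-697136-m02 --driver=docker --container-runtime=crio; echo "exit=$?"   # exit=14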

                                                
                                    
TestScheduledStopUnix (100.26s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-661195 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-661195 --memory=2048 --driver=docker  --container-runtime=crio: (24.153613411s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-661195 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-661195 -n scheduled-stop-661195
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-661195 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-661195 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-661195 -n scheduled-stop-661195
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-661195
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-661195 --schedule 15s
E0531 19:12:17.966182   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-661195
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-661195: exit status 7 (57.171755ms)

                                                
                                                
-- stdout --
	scheduled-stop-661195
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-661195 -n scheduled-stop-661195
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-661195 -n scheduled-stop-661195: exit status 7 (58.049191ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-661195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-661195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-661195: (4.91646388s)
--- PASS: TestScheduledStopUnix (100.26s)
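
The scheduled-stop lifecycle exercised here, sketched with an illustrative profile name:

    minikube stop -p sched-demo --schedule 5m        # arm a delayed stop
    minikube status --format='{{.TimeToStop}}' -p sched-demo
    minikube stop -p sched-demo --cancel-scheduled   # disarm before it fires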

                                                
                                    
TestInsufficientStorage (9.66s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-103892 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-103892 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.422588809s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e1e9315b-4685-40e4-aa9e-6396f048d062","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-103892] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"28b5aa16-1f79-4266-ac08-b9e00d0b64e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16569"}}
	{"specversion":"1.0","id":"1f2ef390-9fef-42e8-9e71-61f5d2bc2326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fc92c1a9-cd30-468f-813a-1f44b9562621","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig"}}
	{"specversion":"1.0","id":"31b5bc1a-9092-4994-9505-c29eeb00b3de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube"}}
	{"specversion":"1.0","id":"8754c15c-cb02-460d-bcb2-3ca2b418b86f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0ed88eb9-ff68-4d18-b83d-2521fdaa8e79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"16da0f8a-6ce6-48b2-9b1e-342b9b6332ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"096575a9-1935-4254-a9b9-8f0897914129","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"990e44e1-bffe-49af-86cb-acf50aff43c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4912e166-cc42-4f74-9bf1-05d957631550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b74f6901-6d74-4d90-8268-a0235c7af84d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-103892 in cluster insufficient-storage-103892","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbab29e2-64b0-4ac5-8cc7-0b0ea00ddffc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"60d01d12-97fb-476f-8b59-64dc93257a7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae3930ee-e168-4f78-9728-f3da17584cef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-103892 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-103892 --output=json --layout=cluster: exit status 7 (234.814694ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-103892","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-103892","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:13:04.504028  143014 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-103892" does not appear in /home/jenkins/minikube-integration/16569-7270/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-103892 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-103892 --output=json --layout=cluster: exit status 7 (232.113874ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-103892","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-103892","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:13:04.737232  143101 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-103892" does not appear in /home/jenkins/minikube-integration/16569-7270/kubeconfig
	E0531 19:13:04.746329  143101 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/insufficient-storage-103892/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-103892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-103892
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-103892: (1.770979015s)
--- PASS: TestInsufficientStorage (9.66s)
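
As the variables echoed in the JSON output suggest, the harness caps the storage minikube believes is available, and start then fails with exit code 26 (RSRC_DOCKER_STORAGE); the error text notes that --force would skip the check. A hedged sketch (the env-var semantics are inferred from the output above, not documented flags):

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --output=json --wait=true; echo "exit=$?"   # exit=26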

                                                
                                    
x
+
TestKubernetesUpgrade (353.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-810976 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-810976 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.96446176s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-810976
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-810976: (1.309529977s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-810976 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-810976 status --format={{.Host}}: exit status 7 (65.36092ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-810976 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-810976 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.809907526s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-810976 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-810976 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-810976 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (74.665301ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-810976] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-810976
	    minikube start -p kubernetes-upgrade-810976 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8109762 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-810976 --kubernetes-version=v1.27.2
	    

                                                
                                                
** /stderr **
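minikube refuses in-place Kubernetes downgrades, so the v1.27.2 -> v1.16.0 attempt exits 106 before touching the cluster. The first recovery path from the error message, restated:

	minikube delete -p kubernetes-upgrade-810976
	minikube start -p kubernetes-upgrade-810976 --kubernetes-version=v1.16.0

The test instead takes the third path and restarts at the existing v1.27.2, as shown next.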
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-810976 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-810976 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.525338627s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-810976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-810976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-810976: (2.315445922s)
--- PASS: TestKubernetesUpgrade (353.12s)

                                                
                                    
x
+
TestMissingContainerUpgrade (136.59s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.2132410231.exe start -p missing-upgrade-631135 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.9.1.2132410231.exe start -p missing-upgrade-631135 --memory=2200 --driver=docker  --container-runtime=crio: (1m6.674877439s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-631135
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-631135: (2.996535517s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-631135
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-631135 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-631135 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.294134364s)
helpers_test.go:175: Cleaning up "missing-upgrade-631135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-631135
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-631135: (2.107043091s)
--- PASS: TestMissingContainerUpgrade (136.59s)
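The sequence above exercises recovery from a node container that disappeared out from under an old profile: provision with the legacy v1.9.1 binary, remove the container behind minikube's back, then let the current binary recreate it. Condensed from the log:

	docker stop missing-upgrade-631135 && docker rm missing-upgrade-631135
	out/minikube-linux-amd64 start -p missing-upgrade-631135 --memory=2200 --driver=docker --container-runtime=crio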

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-338730 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-338730 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (80.064241ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-338730] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
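--no-kubernetes and --kubernetes-version are mutually exclusive, hence the MK_USAGE exit status 14. A valid no-Kubernetes start simply drops the version flag, exactly as TestNoKubernetes/serial/Start does later in this report:

	out/minikube-linux-amd64 start -p NoKubernetes-338730 --no-kubernetes --driver=docker --container-runtime=crio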

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (40.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-338730 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-338730 --driver=docker  --container-runtime=crio: (40.299887246s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-338730 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (6.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-338730 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-338730 --no-kubernetes --driver=docker  --container-runtime=crio: (4.221927956s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-338730 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-338730 status -o json: exit status 2 (277.589233ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-338730","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-338730
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-338730: (1.963746456s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.46s)
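The JSON status separates the container host from the Kubernetes components, which is how the test confirms a running node with Kubernetes stopped; minikube status exits 2 in that state, so the non-zero exit is expected. A sketch for extracting the pair, assuming jq:

	out/minikube-linux-amd64 -p NoKubernetes-338730 status -o json | jq -r '"\(.Host)/\(.Kubelet)"'
	# Running/Stopped, per the capture above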

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p false-407111 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-407111 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (176.954205ms)

                                                
                                                
-- stdout --
	* [false-407111] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16569
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:13:49.554135  154907 out.go:296] Setting OutFile to fd 1 ...
	I0531 19:13:49.554334  154907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:13:49.554362  154907 out.go:309] Setting ErrFile to fd 2...
	I0531 19:13:49.554381  154907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0531 19:13:49.554520  154907 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16569-7270/.minikube/bin
	I0531 19:13:49.555138  154907 out.go:303] Setting JSON to false
	I0531 19:13:49.556880  154907 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3379,"bootTime":1685557051,"procs":793,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:13:49.556995  154907 start.go:137] virtualization: kvm guest
	I0531 19:13:49.560144  154907 out.go:177] * [false-407111] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:13:49.562446  154907 out.go:177]   - MINIKUBE_LOCATION=16569
	I0531 19:13:49.562409  154907 notify.go:220] Checking for updates...
	I0531 19:13:49.565070  154907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:13:49.567103  154907 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16569-7270/kubeconfig
	I0531 19:13:49.569451  154907 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16569-7270/.minikube
	I0531 19:13:49.571217  154907 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:13:49.576019  154907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:13:49.582742  154907 config.go:182] Loaded profile config "NoKubernetes-338730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0531 19:13:49.582959  154907 config.go:182] Loaded profile config "offline-crio-272093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0531 19:13:49.583086  154907 config.go:182] Loaded profile config "stopped-upgrade-360822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0531 19:13:49.583222  154907 driver.go:375] Setting default libvirt URI to qemu:///system
	I0531 19:13:49.609662  154907 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0531 19:13:49.609738  154907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0531 19:13:49.668436  154907 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:93 SystemTime:2023-05-31 19:13:49.659925279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0531 19:13:49.668550  154907 docker.go:294] overlay module found
	I0531 19:13:49.671046  154907 out.go:177] * Using the docker driver based on user configuration
	I0531 19:13:49.673010  154907 start.go:297] selected driver: docker
	I0531 19:13:49.673025  154907 start.go:875] validating driver "docker" against <nil>
	I0531 19:13:49.673043  154907 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:13:49.675551  154907 out.go:177] 
	W0531 19:13:49.677276  154907 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0531 19:13:49.679006  154907 out.go:177] 

                                                
                                                
** /stderr **
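The crio runtime requires a CNI, so --cni=false is rejected with exit status 14 before any provisioning happens. A hypothetical variant of the same invocation with a supported CNI (the flannel and bridge groups later in this section pass the analogous flag):

	out/minikube-linux-amd64 start -p false-407111 --memory=2048 --alsologtostderr --cni=bridge --driver=docker --container-runtime=crio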
net_test.go:86: 
----------------------- debugLogs start: false-407111 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-407111" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 31 May 2023 19:13:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-338730
contexts:
- context:
    cluster: NoKubernetes-338730
    extensions:
    - extension:
        last-update: Wed, 31 May 2023 19:13:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: NoKubernetes-338730
  name: NoKubernetes-338730
current-context: NoKubernetes-338730
kind: Config
preferences: {}
users:
- name: NoKubernetes-338730
  user:
    client-certificate: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/NoKubernetes-338730/client.crt
    client-key: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/NoKubernetes-338730/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-407111

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-407111"

                                                
                                                
----------------------- debugLogs end: false-407111 [took: 3.32514221s] --------------------------------
helpers_test.go:175: Cleaning up "false-407111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-407111
--- PASS: TestNetworkPlugins/group/false (3.66s)
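Every debugLogs probe above fails with "context was not found" or "Profile ... not found" because the start command exited before creating the profile; the only kubeconfig entry captured is the concurrent NoKubernetes-338730 cluster. A quick way to confirm which contexts exist at that point:

	kubectl config get-contexts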

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-338730 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-338730 --no-kubernetes --driver=docker  --container-runtime=crio: (5.636597487s)
--- PASS: TestNoKubernetes/serial/Start (5.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-338730 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-338730 "sudo systemctl is-active --quiet service kubelet": exit status 1 (256.226237ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
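systemctl is-active exits 0 only when the unit is active; the remote status 3 here means the kubelet unit is inactive, which minikube ssh surfaces as its own exit status 1. Dropping --quiet prints the state instead of relying solely on the exit code:

	out/minikube-linux-amd64 ssh -p NoKubernetes-338730 "sudo systemctl is-active kubelet"
	# expected to print "inactive" for this profile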

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-338730
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-338730: (1.226257419s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-338730 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-338730 --driver=docker  --container-runtime=crio: (6.706997618s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-338730 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-338730 "sudo systemctl is-active --quiet service kubelet": exit status 1 (361.044346ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                    
x
+
TestPause/serial/Start (48.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-349180 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0531 19:14:19.635200   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-349180 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (48.987485456s)
--- PASS: TestPause/serial/Start (48.99s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-360822
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.57s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (43.97s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-349180 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-349180 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.94424079s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.97s)

                                                
                                    
x
+
TestPause/serial/Pause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-349180 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-349180 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-349180 --output=json --layout=cluster: exit status 2 (272.114476ms)

                                                
                                                
-- stdout --
	{"Name":"pause-349180","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-349180","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
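A paused cluster reports the tongue-in-cheek StatusCode 418 ("Paused") and minikube status exits 2, so the non-zero exit is the assertion rather than a failure. Extracting the name from the JSON, assuming jq:

	out/minikube-linux-amd64 status -p pause-349180 --output=json --layout=cluster | jq -r '.StatusName'
	# Paused, per the capture above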

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-349180 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-349180 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.25s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-349180 --alsologtostderr -v=5
E0531 19:15:50.614894   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-349180 --alsologtostderr -v=5: (3.248193892s)
--- PASS: TestPause/serial/DeletePaused (3.25s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.77s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-349180
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-349180: exit status 1 (39.367ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-349180: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.77s)
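The cleanup check leans on docker volume inspect exiting 1 with "no such volume" once the profile's volume is gone. The same idiom in shell form:

	docker volume inspect pause-349180 >/dev/null 2>&1 || echo "volume removed"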

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (70.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m10.993766003s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0531 19:17:17.964237   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.146974363s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-407111 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-407111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-n8xzq" [ada5a3d7-72bd-43ae-aec8-d9774917d3a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-n8xzq" [ada5a3d7-72bd-43ae-aec8-d9774917d3a7] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.012939314s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-g2mnq" [b29a6dc3-794b-4875-a404-3c72ae7ee4bb] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.0131924s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-407111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
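For reference, the three probes in each network-plugin group map to: in-cluster DNS resolution (nslookup kubernetes.default), pod-local reachability (nc to localhost:8080), and hairpin traffic back through the pod's own netcat Service (nc to netcat:8080); nc's -w 5 sets a connect timeout and -z closes the connection without sending data.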

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-407111 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-407111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-lgbxz" [133edb62-1d3b-45cd-810a-bc50c6397ed0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-lgbxz" [133edb62-1d3b-45cd-810a-bc50c6397ed0] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.006831984s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-407111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (42.05s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (42.049282489s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.05s)

TestNetworkPlugins/group/bridge/Start (37.04s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.040684482s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.04s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-407111 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-407111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-v74tt" [42d1f18c-d630-4500-9142-9d7a20bfb558] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-v74tt" [42d1f18c-d630-4500-9142-9d7a20bfb558] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007054387s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-407111 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-407111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hln4c" [39790460-7ad1-401f-b223-836ae9c3a28c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hln4c" [39790460-7ad1-401f-b223-836ae9c3a28c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.103682018s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-407111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-407111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (61.95s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m1.949527713s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.95s)

TestNetworkPlugins/group/kindnet/Start (69.93s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0531 19:19:19.635187   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m9.926784753s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.93s)

TestNetworkPlugins/group/custom-flannel/Start (60.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-407111 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.564620817s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.56s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-z9zk2" [531f603b-d855-4508-b6a1-42bfa859b965] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017959166s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)
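
ControllerPod gates the rest of the calico group on the CNI's node agent actually starting: the harness polls kube-system for a pod labelled k8s-app=calico-node until it is healthy. An equivalent standalone check with kubectl wait, a stand-in for net_test.go's own polling loop rather than what it literally calls:

	kubectl --context calico-407111 -n kube-system wait pod \
	  --selector=k8s-app=calico-node --for=condition=Ready --timeout=10m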

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-407111 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-407111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:148: (dbg) Done: kubectl --context calico-407111 replace --force -f testdata/netcat-deployment.yaml: (1.049330339s)
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-g6hjc" [ddb88d67-f85b-4b2d-a486-d0a3bfa092eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-g6hjc" [ddb88d67-f85b-4b2d-a486-d0a3bfa092eb] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006637107s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.22s)

TestNetworkPlugins/group/calico/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-407111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-407111 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hb7fv" [854859f9-5caa-4e79-8042-78e8120be8ee] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.016530314s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-407111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fkb2p" [1b29dadb-2ac3-4234-9dd7-5da43bc3c548] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-fkb2p" [1b29dadb-2ac3-4234-9dd7-5da43bc3c548] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.006907103s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.35s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-407111 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-407111 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-qjnj4" [de34177c-006b-4d15-a376-29a96c316052] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-qjnj4" [de34177c-006b-4d15-a376-29a96c316052] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.009757099s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-407111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (129.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-630053 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-630053 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m9.361918837s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.36s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-407111 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-407111 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)
E0531 19:25:49.080890   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:49.567909   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:50.615564   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 19:25:51.710229   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:26:09.561522   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:26:10.048921   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:26:32.623290   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:26:32.670645   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:26:41.200466   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:26:50.522048   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:26:51.010062   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:27:17.964379   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
E0531 19:27:39.308215   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:27:45.495146   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:27:52.653134   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:52.658415   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:52.668712   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:52.689005   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:52.729776   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:52.810093   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:52.970468   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:53.291451   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:53.931666   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:54.591810   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:27:55.212346   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:27:57.772547   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:28:02.893230   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:28:06.991124   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:28:12.442873   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:28:12.930988   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:28:13.134103   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
E0531 19:28:13.178853   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (66.6s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-791589 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-791589 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m6.597645088s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.60s)

TestStartStop/group/embed-certs/serial/FirstStart (72.55s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-196700 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-196700 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m12.545547211s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.55s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-912402 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-912402 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m13.84333628s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.84s)

TestStartStop/group/no-preload/serial/DeployApp (7.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-791589 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6d513ed5-ca47-4462-b45e-2a2e14960eec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6d513ed5-ca47-4462-b45e-2a2e14960eec] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.013991753s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-791589 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.40s)
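
DeployApp is a smoke test for the freshly started cluster: it creates the busybox pod from testdata/busybox.yaml, waits for the integration-test=busybox label to report Running, then execs `ulimit -n` purely to prove that kubectl exec round-trips. A hypothetical minimal equivalent of that manifest (the image tag and command are assumptions, not the repository's testdata):

	kubectl --context no-preload-791589 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.36          # assumed tag
	    command: ["sleep", "3600"]   # keep the pod alive for the exec
	EOF
	kubectl --context no-preload-791589 exec busybox -- /bin/sh -c "ulimit -n"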

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-791589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-791589 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/no-preload/serial/Stop (11.95s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-791589 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-791589 --alsologtostderr -v=3: (11.948375477s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.95s)

TestStartStop/group/embed-certs/serial/DeployApp (7.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-196700 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d2c0d8e1-182e-4236-9f89-1fdcc7ee439a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d2c0d8e1-182e-4236-9f89-1fdcc7ee439a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.01237482s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-196700 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.38s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-791589 -n no-preload-791589
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-791589 -n no-preload-791589: exit status 7 (80.936493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-791589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
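
Note the "(may be ok)" above: minikube status deliberately exits non-zero for a stopped profile (exit status 7 here, with the host reported as Stopped), and the test then proves addons can still be enabled while the cluster is down. A sketch of scripting around that convention, built only from the commands in this log (the echo is illustrative):

	# status exits non-zero when the profile is stopped, so capture rather than abort
	if ! state=$(out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-791589 -n no-preload-791589); then
	  echo "host state: ${state}"   # prints "Stopped" for this profile
	fi
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-791589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4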

TestStartStop/group/no-preload/serial/SecondStart (590.62s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-791589 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:22:17.963815   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-791589 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (9m50.352588239s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-791589 -n no-preload-791589
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (590.62s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-196700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-196700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-912402 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f2bacdaf-81a0-49e5-9cb1-116d381043a0] Pending
helpers_test.go:344: "busybox" [f2bacdaf-81a0-49e5-9cb1-116d381043a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f2bacdaf-81a0-49e5-9cb1-116d381043a0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.014142473s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-912402 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

TestStartStop/group/embed-certs/serial/Stop (11.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-196700 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-196700 --alsologtostderr -v=3: (11.977281641s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-912402 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-912402 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-912402 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-912402 --alsologtostderr -v=3: (11.931944912s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.93s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-196700 -n embed-certs-196700
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-196700 -n embed-certs-196700: exit status 7 (60.869045ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-196700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/embed-certs/serial/SecondStart (340.67s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-196700 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:22:39.307288   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:39.312551   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:39.322780   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:39.343045   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:39.383633   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:39.464760   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:39.625167   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:39.946133   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:40.587006   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:41.867830   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-196700 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (5m40.322482669s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-196700 -n embed-certs-196700
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (340.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-912402 -n default-k8s-diff-port-912402
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-912402 -n default-k8s-diff-port-912402: exit status 7 (61.164369ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-912402 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-912402 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:22:44.428613   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:45.495569   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:45.500827   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:45.511117   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:45.531434   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:45.571716   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:45.652038   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:45.812373   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:46.133458   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:46.773893   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:48.054234   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:49.549601   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:22:50.615036   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-912402 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (5m35.762394674s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-912402 -n default-k8s-diff-port-912402
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (336.11s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-630053 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e6bcb469-7603-44bc-a0dd-9414b3981db6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e6bcb469-7603-44bc-a0dd-9414b3981db6] Running
E0531 19:22:55.735540   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:22:59.789781   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013076114s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-630053 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-630053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-630053 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.63s)

TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-630053 --alsologtostderr -v=3
E0531 19:23:05.976021   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-630053 --alsologtostderr -v=3: (12.02761986s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-630053 -n old-k8s-version-630053
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-630053 -n old-k8s-version-630053: exit status 7 (61.690094ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-630053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (66.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-630053 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0531 19:23:20.270059   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:23:26.457128   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:23:48.779609   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:48.784885   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:48.795265   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:48.815806   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:48.856103   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:48.936385   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:49.097403   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:49.417783   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:50.058877   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:51.339631   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:53.661488   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/addons-133126/client.crt: no such file or directory
E0531 19:23:53.899801   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:57.357532   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:23:57.362837   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:23:57.373153   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:23:57.393459   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:23:57.433834   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:23:57.514170   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:23:57.674559   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:23:57.995352   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:23:58.636463   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:23:59.020176   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:23:59.917563   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:24:01.230229   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:24:02.477896   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:24:07.417307   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:24:07.598574   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:24:09.261388   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:24:17.839296   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-630053 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m5.833565068s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-630053 -n old-k8s-version-630053
E0531 19:24:19.634955   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/ingress-addon-legacy-466444/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (66.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-st8pd" [442c4547-72f5-4159-a530-4e679e10ae58] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012951784s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
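For reference, the readiness wait driven by helpers_test.go above can be approximated by hand; a rough kubectl equivalent of the same label-selector poll (context, namespace, and label taken from this run) would be:

    kubectl --context old-k8s-version-630053 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s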

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-st8pd" [442c4547-72f5-4159-a530-4e679e10ae58] Running
E0531 19:24:29.741569   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006775446s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-630053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-630053 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
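The image audit above works by parsing the JSON emitted by crictl inside the node. Assuming jq is available, the same repo tags can be listed manually with:

    out/minikube-linux-amd64 ssh -p old-k8s-version-630053 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'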

TestStartStop/group/old-k8s-version/serial/Pause (2.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-630053 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-630053 -n old-k8s-version-630053
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-630053 -n old-k8s-version-630053: exit status 2 (278.918904ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-630053 -n old-k8s-version-630053
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-630053 -n old-k8s-version-630053: exit status 2 (274.743725ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-630053 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-630053 -n old-k8s-version-630053
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-630053 -n old-k8s-version-630053
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.72s)
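The pause round trip above reduces to the sequence below. While the profile is paused, the {{.APIServer}} field reads Paused and {{.Kubelet}} reads Stopped, each with exit status 2, which is why the harness logs "status error: exit status 2 (may be ok)" instead of failing:

    out/minikube-linux-amd64 pause -p old-k8s-version-630053
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-630053 -n old-k8s-version-630053    # Paused, exit status 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-630053
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-630053 -n old-k8s-version-630053    # Running, exit status 0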

TestStartStop/group/newest-cni/serial/FirstStart (36.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-490754 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:24:38.319999   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:25:10.702189   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/enable-default-cni-407111/client.crt: no such file or directory
E0531 19:25:10.749458   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:10.754714   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:10.765001   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:10.785291   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:10.825584   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:10.906228   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:11.066744   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:11.387191   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-490754 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (36.039141977s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.04s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-490754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0531 19:25:12.027433   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/newest-cni/serial/Stop (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-490754 --alsologtostderr -v=3
E0531 19:25:13.307690   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-490754 --alsologtostderr -v=3: (1.212425319s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-490754 -n newest-cni-490754
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-490754 -n newest-cni-490754: exit status 7 (59.02272ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-490754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/newest-cni/serial/SecondStart (25.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-490754 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0531 19:25:15.868409   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:19.280138   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/bridge-407111/client.crt: no such file or directory
E0531 19:25:20.989067   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:23.150947   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/auto-407111/client.crt: no such file or directory
E0531 19:25:28.600361   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:28.605637   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:28.616028   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:28.636395   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:28.676684   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:28.757056   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:28.917204   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:29.087052   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:29.092355   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:29.102607   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:29.122935   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:29.163216   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:29.237359   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:29.243534   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:29.338018   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/flannel-407111/client.crt: no such file or directory
E0531 19:25:29.404202   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:29.724875   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:29.878264   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:30.365217   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:31.158707   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:31.230011   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/calico-407111/client.crt: no such file or directory
E0531 19:25:31.646062   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:33.718922   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
E0531 19:25:34.206809   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
E0531 19:25:38.839722   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/kindnet-407111/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-490754 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (25.439914114s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-490754 -n newest-cni-490754
E0531 19:25:39.326998   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/custom-flannel-407111/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.73s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-490754 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-490754 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-490754 -n newest-cni-490754
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-490754 -n newest-cni-490754: exit status 2 (280.962483ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-490754 -n newest-cni-490754
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-490754 -n newest-cni-490754: exit status 2 (279.723223ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-490754 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-490754 -n newest-cni-490754
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-490754 -n newest-cni-490754
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.47s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-7x8xw" [2d64a77e-09eb-45c0-977f-7fbfd0456545] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-7x8xw" [2d64a77e-09eb-45c0-977f-7fbfd0456545] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.017430793s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qgqqj" [38d13e09-658a-45c4-9f4b-c1f1d619d219] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qgqqj" [38d13e09-658a-45c4-9f4b-c1f1d619d219] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.060545378s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.06s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-7x8xw" [2d64a77e-09eb-45c0-977f-7fbfd0456545] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007209591s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-196700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-196700 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (2.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-196700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-196700 -n embed-certs-196700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-196700 -n embed-certs-196700: exit status 2 (273.528339ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-196700 -n embed-certs-196700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-196700 -n embed-certs-196700: exit status 2 (282.556996ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-196700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-196700 -n embed-certs-196700
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-196700 -n embed-certs-196700
E0531 19:28:33.614661   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/old-k8s-version-630053/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.59s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qgqqj" [38d13e09-658a-45c4-9f4b-c1f1d619d219] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007439011s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-912402 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-912402 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-912402 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-912402 -n default-k8s-diff-port-912402
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-912402 -n default-k8s-diff-port-912402: exit status 2 (270.481878ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-912402 -n default-k8s-diff-port-912402
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-912402 -n default-k8s-diff-port-912402: exit status 2 (266.922624ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-912402 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-912402 -n default-k8s-diff-port-912402
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-912402 -n default-k8s-diff-port-912402
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-w2n68" [195146f6-3ec8-4de7-aa22-2dc1ee03665b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013065735s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-w2n68" [195146f6-3ec8-4de7-aa22-2dc1ee03665b] Running
E0531 19:32:17.964465   14232 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/functional-744804/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006465193s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-791589 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-791589 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.52s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-791589 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-791589 -n no-preload-791589
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-791589 -n no-preload-791589: exit status 2 (266.266212ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-791589 -n no-preload-791589
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-791589 -n no-preload-791589: exit status 2 (270.766051ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-791589 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-791589 -n no-preload-791589
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-791589 -n no-preload-791589
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.52s)

Test skip (23/302)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnly/v1.27.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:458: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.99s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-407111 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-407111

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-407111

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-407111

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-407111

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-407111

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-407111

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-407111

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-407111

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-407111

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-407111

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: /etc/hosts:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: /etc/resolv.conf:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-407111

>>> host: crictl pods:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: crictl containers:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> k8s: describe netcat deployment:
error: context "kubenet-407111" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-407111" does not exist

>>> k8s: netcat logs:
error: context "kubenet-407111" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-407111" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-407111" does not exist

>>> k8s: coredns logs:
error: context "kubenet-407111" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-407111" does not exist

>>> k8s: api server logs:
error: context "kubenet-407111" does not exist

>>> host: /etc/cni:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-407111" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-407111" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/16569-7270/.minikube/ca.crt
extensions:
- extension:
last-update: Wed, 31 May 2023 19:13:45 UTC
provider: minikube.sigs.k8s.io
version: v1.30.1
name: cluster_info
server: https://192.168.85.2:8443
name: NoKubernetes-338730
contexts:
- context:
cluster: NoKubernetes-338730
extensions:
- extension:
last-update: Wed, 31 May 2023 19:13:45 UTC
provider: minikube.sigs.k8s.io
version: v1.30.1
name: context_info
namespace: default
user: NoKubernetes-338730
name: NoKubernetes-338730
current-context: NoKubernetes-338730
kind: Config
preferences: {}
users:
- name: NoKubernetes-338730
user:
client-certificate: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/NoKubernetes-338730/client.crt
client-key: /home/jenkins/minikube-integration/16569-7270/.minikube/profiles/NoKubernetes-338730/client.key

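Note on the errors above: the kubeconfig dump shows only one context, NoKubernetes-338730, while every kubectl call in this debug log is pinned to kubenet-407111. Because the skipped test never ran "minikube start -p kubenet-407111", that context was never written to the kubeconfig, so each query fails before ever reaching a cluster. A minimal shell sketch of the same check (standard kubectl/minikube commands; the profile and context names are taken from this log):

    # List the contexts kubectl knows about; per the dump, only NoKubernetes-338730 exists.
    kubectl config get-contexts

    # Any call pinned to the missing context fails the same way debugLogs does.
    kubectl --context kubenet-407111 get pods

    # minikube's view of existing profiles, matching the "Profile ... not found" hint above.
    minikube profile list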

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-407111

>>> host: docker daemon status:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: docker daemon config:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: docker system info:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: cri-docker daemon status:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: cri-docker daemon config:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: cri-dockerd version:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: containerd daemon status:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: containerd daemon config:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: containerd config dump:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: crio daemon status:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: crio daemon config:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: /etc/crio:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

>>> host: crio config:
* Profile "kubenet-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-407111"

----------------------- debugLogs end: kubenet-407111 [took: 3.844522056s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-407111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-407111
--- SKIP: TestNetworkPlugins/group/kubenet (3.99s)

x
+
TestNetworkPlugins/group/cilium (3.73s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-407111 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-407111

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-407111

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-407111

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-407111

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-407111

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-407111

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-407111

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-407111

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-407111

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-407111

>>> host: /etc/nsswitch.conf:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /etc/hosts:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /etc/resolv.conf:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-407111

>>> host: crictl pods:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: crictl containers:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> k8s: describe netcat deployment:
error: context "cilium-407111" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-407111" does not exist

>>> k8s: netcat logs:
error: context "cilium-407111" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-407111" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-407111" does not exist

>>> k8s: coredns logs:
error: context "cilium-407111" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-407111" does not exist

>>> k8s: api server logs:
error: context "cilium-407111" does not exist

>>> host: /etc/cni:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: ip a s:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: ip r s:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: iptables-save:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: iptables table nat:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-407111

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-407111

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-407111" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-407111" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-407111

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-407111

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-407111" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-407111" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-407111" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-407111" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-407111" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: kubelet daemon config:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> k8s: kubelet logs:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

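Unlike the kubenet dump earlier, this kubeconfig is entirely empty: clusters, contexts, and users are all null and current-context is unset, so there is no context at all for the cilium-407111 queries to resolve against. A minimal shell sketch of how that state presents itself (standard kubectl commands; nothing here is specific to this run):

    # With no clusters or contexts configured, the config renders with null lists,
    # matching the dump above.
    kubectl config view

    # Resolving the current context fails because current-context is unset
    # in an empty config.
    kubectl config current-context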

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-407111

>>> host: docker daemon status:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: docker daemon config:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: docker system info:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: cri-docker daemon status:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: cri-docker daemon config:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: cri-dockerd version:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: containerd daemon status:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: containerd daemon config:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: containerd config dump:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: crio daemon status:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: crio daemon config:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: /etc/crio:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

>>> host: crio config:
* Profile "cilium-407111" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-407111"

----------------------- debugLogs end: cilium-407111 [took: 3.57516463s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-407111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-407111
--- SKIP: TestNetworkPlugins/group/cilium (3.73s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-952398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-952398
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)