Test Report: Docker_Linux_crio 15074

0bb29fe744a5c7c8bbbb0deb1ac8f2e2fc2fbd4c:2023-06-10:29641

Failed tests (6/302)

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 150.51
113 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 10.85
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 172.97
202 TestMultiNode/serial/PingHostFrom2Pods 3.32
223 TestRunningBinaryUpgrade 64.77
249 TestStoppedBinaryUpgrade/Upgrade 92.3
TestAddons/parallel/Ingress (150.51s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-060929 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-060929 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-060929 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [36721655-4474-43b1-89e5-d0faea45d7cb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [36721655-4474-43b1-89e5-d0faea45d7cb] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005324808s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-060929 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.507503882s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
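The `ssh: Process exited with status 28` in the stderr block above is the remote `curl`'s exit status, and curl exit code 28 is `CURLE_OPERATION_TIMEDOUT`: the request reached a socket but no HTTP response ever came back from the ingress. A minimal local sketch of that failure mode (assumptions, not part of the test: `python3` and `curl` on PATH, port 8099 free; the `-m 1` timeout stands in for the test's much longer wait):

```shell
# A listener that accepts the TCP connection but never sends an HTTP response,
# mimicking an ingress endpoint that is up but not serving.
python3 - <<'EOF' &
import socket, time
s = socket.socket()
s.bind(("127.0.0.1", 8099))   # hypothetical local port
s.listen(1)
conn, _ = s.accept()          # accept the connection, then stay silent
time.sleep(5)
EOF
LISTENER=$!
sleep 0.5                     # give the listener time to bind
curl -s -m 1 http://127.0.0.1:8099/ -H 'Host: nginx.example.com'
CURL_EXIT=$?                  # 28 = CURLE_OPERATION_TIMEDOUT
echo "curl exit: ${CURL_EXIT}"
kill "${LISTENER}" 2>/dev/null
```

An exit of 7 (connection refused) would instead mean nothing was listening on the port; 28 means the connection was accepted but the server never answered.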
addons_test.go:262: (dbg) Run:  kubectl --context addons-060929 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-060929 addons disable ingress --alsologtostderr -v=1: (7.430391965s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-060929
helpers_test.go:235: (dbg) docker inspect addons-060929:

-- stdout --
	[
	    {
	        "Id": "abd5aecf4a3151a313bfc164cf4107565a35f47d10980ccd8ff3ff4ea8c1364c",
	        "Created": "2023-06-10T14:01:57.247785038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-10T14:01:57.542318374Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/abd5aecf4a3151a313bfc164cf4107565a35f47d10980ccd8ff3ff4ea8c1364c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/abd5aecf4a3151a313bfc164cf4107565a35f47d10980ccd8ff3ff4ea8c1364c/hostname",
	        "HostsPath": "/var/lib/docker/containers/abd5aecf4a3151a313bfc164cf4107565a35f47d10980ccd8ff3ff4ea8c1364c/hosts",
	        "LogPath": "/var/lib/docker/containers/abd5aecf4a3151a313bfc164cf4107565a35f47d10980ccd8ff3ff4ea8c1364c/abd5aecf4a3151a313bfc164cf4107565a35f47d10980ccd8ff3ff4ea8c1364c-json.log",
	        "Name": "/addons-060929",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-060929:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-060929",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4f286e5dd8af6769e544cf37cac3c43066849280c972b4becc8f1652a2622bd8-init/diff:/var/lib/docker/overlay2/0dc1ddb6d62b4bee9beafd5f34260acd069d63ff74f1b10678aeef7f32badeb3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f286e5dd8af6769e544cf37cac3c43066849280c972b4becc8f1652a2622bd8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f286e5dd8af6769e544cf37cac3c43066849280c972b4becc8f1652a2622bd8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f286e5dd8af6769e544cf37cac3c43066849280c972b4becc8f1652a2622bd8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-060929",
	                "Source": "/var/lib/docker/volumes/addons-060929/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-060929",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-060929",
	                "name.minikube.sigs.k8s.io": "addons-060929",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "96d503c7a5123a65e7ed69d30374d915773e67eec2fe5ad0c3bb8e54fd1006a6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/96d503c7a512",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-060929": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "abd5aecf4a31",
	                        "addons-060929"
	                    ],
	                    "NetworkID": "83b60bb648f03b40b2dfc1c381f37816b71d06dee5a29cc9e61c9a05222f6a45",
	                    "EndpointID": "8f1cf7d9fe1cafe784d1563f6bba55acbf921d2a8416d232784a84074913c9f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-060929 -n addons-060929
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-060929 logs -n 25: (1.099732567s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-735343   | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC |                     |
	|         | -p download-only-735343        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-735343   | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC |                     |
	|         | -p download-only-735343        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.2   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC | 10 Jun 23 14:01 UTC |
	| delete  | -p download-only-735343        | download-only-735343   | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC | 10 Jun 23 14:01 UTC |
	| delete  | -p download-only-735343        | download-only-735343   | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC | 10 Jun 23 14:01 UTC |
	| start   | --download-only -p             | download-docker-347332 | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC |                     |
	|         | download-docker-347332         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-347332      | download-docker-347332 | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC | 10 Jun 23 14:01 UTC |
	| start   | --download-only -p             | binary-mirror-545398   | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC |                     |
	|         | binary-mirror-545398           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38625         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-545398        | binary-mirror-545398   | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC | 10 Jun 23 14:01 UTC |
	| start   | -p addons-060929               | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC | 10 Jun 23 14:03 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	|         | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:03 UTC | 10 Jun 23 14:03 UTC |
	|         | -p addons-060929               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-060929 addons           | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:03 UTC | 10 Jun 23 14:03 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-060929 ip               | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:03 UTC | 10 Jun 23 14:03 UTC |
	| addons  | addons-060929 addons disable   | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:03 UTC | 10 Jun 23 14:03 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-060929 addons disable   | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:03 UTC | 10 Jun 23 14:03 UTC |
	|         | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:03 UTC | 10 Jun 23 14:03 UTC |
	|         | addons-060929                  |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:03 UTC | 10 Jun 23 14:03 UTC |
	|         | addons-060929                  |                        |         |         |                     |                     |
	| ssh     | addons-060929 ssh curl -s      | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:04 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-060929 addons           | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:04 UTC | 10 Jun 23 14:04 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-060929 addons           | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:04 UTC | 10 Jun 23 14:04 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-060929 ip               | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:06 UTC | 10 Jun 23 14:06 UTC |
	| addons  | addons-060929 addons disable   | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:06 UTC | 10 Jun 23 14:06 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-060929 addons disable   | addons-060929          | jenkins | v1.30.1 | 10 Jun 23 14:06 UTC | 10 Jun 23 14:06 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 14:01:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 14:01:34.199655   26375 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:01:34.199762   26375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:01:34.199771   26375 out.go:309] Setting ErrFile to fd 2...
	I0610 14:01:34.199775   26375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:01:34.199875   26375 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:01:34.200417   26375 out.go:303] Setting JSON to false
	I0610 14:01:34.201152   26375 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6249,"bootTime":1686399445,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:01:34.201199   26375 start.go:137] virtualization: kvm guest
	I0610 14:01:34.203654   26375 out.go:177] * [addons-060929] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 14:01:34.205218   26375 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 14:01:34.206638   26375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:01:34.205177   26375 notify.go:220] Checking for updates...
	I0610 14:01:34.208866   26375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:01:34.210435   26375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:01:34.211920   26375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 14:01:34.213404   26375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 14:01:34.215041   26375 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 14:01:34.237397   26375 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:01:34.237476   26375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:01:34.279164   26375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-06-10 14:01:34.271245827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:01:34.279280   26375 docker.go:294] overlay module found
	I0610 14:01:34.281324   26375 out.go:177] * Using the docker driver based on user configuration
	I0610 14:01:34.282776   26375 start.go:297] selected driver: docker
	I0610 14:01:34.282788   26375 start.go:875] validating driver "docker" against <nil>
	I0610 14:01:34.282799   26375 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 14:01:34.283545   26375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:01:34.328182   26375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-06-10 14:01:34.320616807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:01:34.328363   26375 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 14:01:34.328622   26375 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 14:01:34.330514   26375 out.go:177] * Using Docker driver with root privileges
	I0610 14:01:34.332184   26375 cni.go:84] Creating CNI manager for ""
	I0610 14:01:34.332195   26375 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0610 14:01:34.332201   26375 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 14:01:34.332211   26375 start_flags.go:319] config:
	{Name:addons-060929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-060929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:01:34.333937   26375 out.go:177] * Starting control plane node addons-060929 in cluster addons-060929
	I0610 14:01:34.335500   26375 cache.go:122] Beginning downloading kic base image for docker with crio
	I0610 14:01:34.337031   26375 out.go:177] * Pulling base image ...
	I0610 14:01:34.338463   26375 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:01:34.338494   26375 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4
	I0610 14:01:34.338504   26375 cache.go:57] Caching tarball of preloaded images
	I0610 14:01:34.338546   26375 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 14:01:34.338561   26375 preload.go:174] Found /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 14:01:34.338567   26375 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0610 14:01:34.338819   26375 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/config.json ...
	I0610 14:01:34.338836   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/config.json: {Name:mkdd6b4cda7931dbb8b86eda8e1d906fd642542e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:01:34.352584   26375 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0610 14:01:34.352686   26375 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory
	I0610 14:01:34.352711   26375 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory, skipping pull
	I0610 14:01:34.352717   26375 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in cache, skipping pull
	I0610 14:01:34.352731   26375 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b as a tarball
	I0610 14:01:34.352741   26375 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b from local cache
	I0610 14:01:44.874670   26375 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b from cached tarball
	I0610 14:01:44.874717   26375 cache.go:195] Successfully downloaded all kic artifacts
	I0610 14:01:44.874752   26375 start.go:364] acquiring machines lock for addons-060929: {Name:mk19acad9c0214fec1442ef929dc6e8917ac8dbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:01:44.874850   26375 start.go:368] acquired machines lock for "addons-060929" in 78.8µs
	I0610 14:01:44.874875   26375 start.go:93] Provisioning new machine with config: &{Name:addons-060929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-060929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 14:01:44.874976   26375 start.go:125] createHost starting for "" (driver="docker")
	I0610 14:01:44.876891   26375 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0610 14:01:44.877142   26375 start.go:159] libmachine.API.Create for "addons-060929" (driver="docker")
	I0610 14:01:44.877172   26375 client.go:168] LocalClient.Create starting
	I0610 14:01:44.877293   26375 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem
	I0610 14:01:45.024266   26375 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem
	I0610 14:01:45.350232   26375 cli_runner.go:164] Run: docker network inspect addons-060929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0610 14:01:45.365022   26375 cli_runner.go:211] docker network inspect addons-060929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0610 14:01:45.365093   26375 network_create.go:281] running [docker network inspect addons-060929] to gather additional debugging logs...
	I0610 14:01:45.365121   26375 cli_runner.go:164] Run: docker network inspect addons-060929
	W0610 14:01:45.378357   26375 cli_runner.go:211] docker network inspect addons-060929 returned with exit code 1
	I0610 14:01:45.378384   26375 network_create.go:284] error running [docker network inspect addons-060929]: docker network inspect addons-060929: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-060929 not found
	I0610 14:01:45.378425   26375 network_create.go:286] output of [docker network inspect addons-060929]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-060929 not found
	
	** /stderr **
	I0610 14:01:45.378470   26375 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 14:01:45.392448   26375 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014f0a40}
	I0610 14:01:45.392483   26375 network_create.go:123] attempt to create docker network addons-060929 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0610 14:01:45.392535   26375 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-060929 addons-060929
	I0610 14:01:45.440513   26375 network_create.go:107] docker network addons-060929 192.168.49.0/24 created
	I0610 14:01:45.440538   26375 kic.go:117] calculated static IP "192.168.49.2" for the "addons-060929" container
	I0610 14:01:45.440592   26375 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0610 14:01:45.454556   26375 cli_runner.go:164] Run: docker volume create addons-060929 --label name.minikube.sigs.k8s.io=addons-060929 --label created_by.minikube.sigs.k8s.io=true
	I0610 14:01:45.469487   26375 oci.go:103] Successfully created a docker volume addons-060929
	I0610 14:01:45.469542   26375 cli_runner.go:164] Run: docker run --rm --name addons-060929-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060929 --entrypoint /usr/bin/test -v addons-060929:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
	I0610 14:01:52.450582   26375 cli_runner.go:217] Completed: docker run --rm --name addons-060929-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060929 --entrypoint /usr/bin/test -v addons-060929:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib: (6.980971957s)
	I0610 14:01:52.450684   26375 oci.go:107] Successfully prepared a docker volume addons-060929
	I0610 14:01:52.450717   26375 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:01:52.450743   26375 kic.go:190] Starting extracting preloaded images to volume ...
	I0610 14:01:52.450804   26375 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-060929:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
	I0610 14:01:57.185485   26375 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-060929:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (4.734626008s)
	I0610 14:01:57.185519   26375 kic.go:199] duration metric: took 4.734770 seconds to extract preloaded images to volume
	W0610 14:01:57.185647   26375 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0610 14:01:57.185750   26375 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0610 14:01:57.234997   26375 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-060929 --name addons-060929 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-060929 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-060929 --network addons-060929 --ip 192.168.49.2 --volume addons-060929:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0610 14:01:57.550331   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Running}}
	I0610 14:01:57.566303   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:01:57.581830   26375 cli_runner.go:164] Run: docker exec addons-060929 stat /var/lib/dpkg/alternatives/iptables
	I0610 14:01:57.644864   26375 oci.go:144] the created container "addons-060929" has a running status.
	I0610 14:01:57.644895   26375 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa...
	I0610 14:01:57.872445   26375 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0610 14:01:57.904026   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:01:57.921678   26375 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0610 14:01:57.921701   26375 kic_runner.go:114] Args: [docker exec --privileged addons-060929 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0610 14:01:57.993191   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:01:58.009775   26375 machine.go:88] provisioning docker machine ...
	I0610 14:01:58.009809   26375 ubuntu.go:169] provisioning hostname "addons-060929"
	I0610 14:01:58.009870   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:01:58.029756   26375 main.go:141] libmachine: Using SSH client type: native
	I0610 14:01:58.030143   26375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0610 14:01:58.030152   26375 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-060929 && echo "addons-060929" | sudo tee /etc/hostname
	I0610 14:01:58.172494   26375 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-060929
	
	I0610 14:01:58.172563   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:01:58.188871   26375 main.go:141] libmachine: Using SSH client type: native
	I0610 14:01:58.189303   26375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0610 14:01:58.189333   26375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-060929' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-060929/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-060929' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 14:01:58.305680   26375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 14:01:58.305702   26375 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15074-18675/.minikube CaCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15074-18675/.minikube}
	I0610 14:01:58.305722   26375 ubuntu.go:177] setting up certificates
	I0610 14:01:58.305732   26375 provision.go:83] configureAuth start
	I0610 14:01:58.305781   26375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060929
	I0610 14:01:58.321534   26375 provision.go:138] copyHostCerts
	I0610 14:01:58.321590   26375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem (1078 bytes)
	I0610 14:01:58.321682   26375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem (1123 bytes)
	I0610 14:01:58.321733   26375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem (1675 bytes)
	I0610 14:01:58.321775   26375 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem org=jenkins.addons-060929 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-060929]
	I0610 14:01:58.435843   26375 provision.go:172] copyRemoteCerts
	I0610 14:01:58.435890   26375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 14:01:58.435922   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:01:58.450544   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:01:58.533674   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 14:01:58.553011   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 14:01:58.572115   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 14:01:58.590823   26375 provision.go:86] duration metric: configureAuth took 285.076698ms
	I0610 14:01:58.590841   26375 ubuntu.go:193] setting minikube options for container-runtime
	I0610 14:01:58.590974   26375 config.go:182] Loaded profile config "addons-060929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:01:58.591059   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:01:58.606856   26375 main.go:141] libmachine: Using SSH client type: native
	I0610 14:01:58.607233   26375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0610 14:01:58.607252   26375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 14:01:58.798309   26375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 14:01:58.798334   26375 machine.go:91] provisioned docker machine in 788.537525ms
	I0610 14:01:58.798345   26375 client.go:171] LocalClient.Create took 13.921161493s
	I0610 14:01:58.798364   26375 start.go:167] duration metric: libmachine.API.Create for "addons-060929" took 13.92122259s
	I0610 14:01:58.798373   26375 start.go:300] post-start starting for "addons-060929" (driver="docker")
	I0610 14:01:58.798384   26375 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 14:01:58.798444   26375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 14:01:58.798491   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:01:58.813961   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:01:58.902023   26375 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 14:01:58.904623   26375 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0610 14:01:58.904658   26375 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0610 14:01:58.904677   26375 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0610 14:01:58.904689   26375 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0610 14:01:58.904701   26375 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/addons for local assets ...
	I0610 14:01:58.904759   26375 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/files for local assets ...
	I0610 14:01:58.904787   26375 start.go:303] post-start completed in 106.403213ms
	I0610 14:01:58.905065   26375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060929
	I0610 14:01:58.919926   26375 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/config.json ...
	I0610 14:01:58.920129   26375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 14:01:58.920160   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:01:58.935553   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:01:59.014253   26375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0610 14:01:59.017768   26375 start.go:128] duration metric: createHost completed in 14.142780331s
	I0610 14:01:59.017786   26375 start.go:83] releasing machines lock for "addons-060929", held for 14.142923335s
	I0610 14:01:59.017847   26375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-060929
	I0610 14:01:59.032886   26375 ssh_runner.go:195] Run: cat /version.json
	I0610 14:01:59.032931   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:01:59.032992   26375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 14:01:59.033055   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:01:59.048728   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:01:59.049822   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:01:59.250510   26375 ssh_runner.go:195] Run: systemctl --version
	I0610 14:01:59.254112   26375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 14:01:59.386921   26375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 14:01:59.390646   26375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:01:59.406697   26375 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0610 14:01:59.406762   26375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:01:59.430700   26375 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0610 14:01:59.430719   26375 start.go:481] detecting cgroup driver to use...
	I0610 14:01:59.430743   26375 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0610 14:01:59.430779   26375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 14:01:59.443363   26375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 14:01:59.452068   26375 docker.go:193] disabling cri-docker service (if available) ...
	I0610 14:01:59.452108   26375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 14:01:59.462804   26375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 14:01:59.473529   26375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 14:01:59.539789   26375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 14:01:59.611069   26375 docker.go:209] disabling docker service ...
	I0610 14:01:59.611125   26375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 14:01:59.625926   26375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 14:01:59.634769   26375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 14:01:59.707880   26375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 14:01:59.780172   26375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 14:01:59.789179   26375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 14:01:59.802416   26375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 14:01:59.802474   26375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:01:59.810104   26375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 14:01:59.810149   26375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:01:59.818085   26375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:01:59.825416   26375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:01:59.832780   26375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 14:01:59.839667   26375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 14:01:59.846225   26375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 14:01:59.852597   26375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 14:01:59.928046   26375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 14:02:00.022583   26375 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 14:02:00.022650   26375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 14:02:00.025478   26375 start.go:549] Will wait 60s for crictl version
	I0610 14:02:00.025528   26375 ssh_runner.go:195] Run: which crictl
	I0610 14:02:00.028079   26375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 14:02:00.058069   26375 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0610 14:02:00.058174   26375 ssh_runner.go:195] Run: crio --version
	I0610 14:02:00.088817   26375 ssh_runner.go:195] Run: crio --version
	I0610 14:02:00.119660   26375 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0610 14:02:00.121133   26375 cli_runner.go:164] Run: docker network inspect addons-060929 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 14:02:00.135444   26375 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0610 14:02:00.138589   26375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 14:02:00.147699   26375 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:02:00.147767   26375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 14:02:00.191985   26375 crio.go:496] all images are preloaded for cri-o runtime.
	I0610 14:02:00.192002   26375 crio.go:415] Images already preloaded, skipping extraction
	I0610 14:02:00.192040   26375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 14:02:00.220202   26375 crio.go:496] all images are preloaded for cri-o runtime.
	I0610 14:02:00.220229   26375 cache_images.go:84] Images are preloaded, skipping loading
	I0610 14:02:00.220283   26375 ssh_runner.go:195] Run: crio config
	I0610 14:02:00.258656   26375 cni.go:84] Creating CNI manager for ""
	I0610 14:02:00.258674   26375 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0610 14:02:00.258687   26375 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 14:02:00.258709   26375 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-060929 NodeName:addons-060929 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 14:02:00.258884   26375 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-060929"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 14:02:00.258974   26375 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-060929 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:addons-060929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 14:02:00.259027   26375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 14:02:00.266838   26375 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 14:02:00.266899   26375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 14:02:00.273990   26375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0610 14:02:00.287870   26375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 14:02:00.301660   26375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0610 14:02:00.315097   26375 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0610 14:02:00.317639   26375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 14:02:00.326002   26375 certs.go:56] Setting up /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929 for IP: 192.168.49.2
	I0610 14:02:00.326023   26375 certs.go:190] acquiring lock for shared ca certs: {Name:mk47e57fed67616a983122d88149f57794c568cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:00.326130   26375 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key
	I0610 14:02:00.459446   26375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt ...
	I0610 14:02:00.459469   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt: {Name:mkda6947bb72a55fec4aea51fd2c854a22112379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:00.459603   26375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key ...
	I0610 14:02:00.459613   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key: {Name:mk2b2ed2c5c39e1559facba567ed123912df2df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:00.459676   26375 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key
	I0610 14:02:00.813824   26375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.crt ...
	I0610 14:02:00.813851   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.crt: {Name:mkd7bd66f9baa5b8b84f8abbe708cb536082a634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:00.814031   26375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key ...
	I0610 14:02:00.814044   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key: {Name:mkb8c6a065cea00dd080df7554efdfd96661797b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:00.814167   26375 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.key
	I0610 14:02:00.814185   26375 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt with IP's: []
	I0610 14:02:01.142216   26375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt ...
	I0610 14:02:01.142244   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: {Name:mk50cf2519d956602c23d95c895e93dbdd2dccb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:01.142408   26375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.key ...
	I0610 14:02:01.142420   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.key: {Name:mkc877e47018454b6841125751c9cc2411bdcfbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:01.142492   26375 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.key.dd3b5fb2
	I0610 14:02:01.142510   26375 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 14:02:01.258887   26375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.crt.dd3b5fb2 ...
	I0610 14:02:01.258918   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.crt.dd3b5fb2: {Name:mk849ad6edb885afaac68f844a3083d85a03e209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:01.259090   26375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.key.dd3b5fb2 ...
	I0610 14:02:01.259105   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.key.dd3b5fb2: {Name:mkfb128921c76396fa50b1f7c1f1b749c8139a26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:01.259199   26375 certs.go:337] copying /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.crt
	I0610 14:02:01.259304   26375 certs.go:341] copying /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.key
	I0610 14:02:01.259371   26375 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/proxy-client.key
	I0610 14:02:01.259393   26375 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/proxy-client.crt with IP's: []
	I0610 14:02:01.520389   26375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/proxy-client.crt ...
	I0610 14:02:01.520417   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/proxy-client.crt: {Name:mk092131bb7c2f1faabb2454aca671da677b6738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:01.520584   26375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/proxy-client.key ...
	I0610 14:02:01.520597   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/proxy-client.key: {Name:mk5a03dfe93e776023792f23c8e1a7199bf023d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:01.520793   26375 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 14:02:01.520837   26375 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem (1078 bytes)
	I0610 14:02:01.520871   26375 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem (1123 bytes)
	I0610 14:02:01.520906   26375 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem (1675 bytes)
	I0610 14:02:01.521404   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 14:02:01.541620   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 14:02:01.560093   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 14:02:01.578947   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 14:02:01.597875   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 14:02:01.616488   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 14:02:01.635026   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 14:02:01.653392   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 14:02:01.672081   26375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 14:02:01.691093   26375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 14:02:01.705102   26375 ssh_runner.go:195] Run: openssl version
	I0610 14:02:01.709708   26375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 14:02:01.717051   26375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:02:01.719891   26375 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 14:02 /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:02:01.719935   26375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:02:01.725755   26375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 14:02:01.733398   26375 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 14:02:01.736087   26375 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 14:02:01.736134   26375 kubeadm.go:404] StartCluster: {Name:addons-060929 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-060929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:02:01.736203   26375 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 14:02:01.736237   26375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 14:02:01.765791   26375 cri.go:88] found id: ""
	I0610 14:02:01.765849   26375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 14:02:01.772998   26375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 14:02:01.779945   26375 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0610 14:02:01.779991   26375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 14:02:01.786825   26375 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 14:02:01.786864   26375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0610 14:02:01.857412   26375 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0610 14:02:01.912226   26375 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 14:02:01.912499   26375 kubeadm.go:322] W0610 14:02:01.912167    1154 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 14:02:04.243989   26375 kubeadm.go:322] W0610 14:02:04.243638    1154 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 14:02:10.268156   26375 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 14:02:10.268224   26375 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 14:02:10.268343   26375 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0610 14:02:10.268438   26375 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1035-gcp
	I0610 14:02:10.268494   26375 kubeadm.go:322] OS: Linux
	I0610 14:02:10.268569   26375 kubeadm.go:322] CGROUPS_CPU: enabled
	I0610 14:02:10.268651   26375 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0610 14:02:10.268715   26375 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0610 14:02:10.268787   26375 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0610 14:02:10.268857   26375 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0610 14:02:10.268938   26375 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0610 14:02:10.269013   26375 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0610 14:02:10.269081   26375 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0610 14:02:10.269144   26375 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0610 14:02:10.269243   26375 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 14:02:10.269359   26375 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 14:02:10.269495   26375 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 14:02:10.269585   26375 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 14:02:10.271191   26375 out.go:204]   - Generating certificates and keys ...
	I0610 14:02:10.271282   26375 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 14:02:10.271403   26375 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 14:02:10.271519   26375 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 14:02:10.271612   26375 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 14:02:10.271691   26375 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 14:02:10.271761   26375 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 14:02:10.271841   26375 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 14:02:10.271968   26375 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-060929 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0610 14:02:10.272032   26375 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 14:02:10.272155   26375 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-060929 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0610 14:02:10.272225   26375 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 14:02:10.272311   26375 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 14:02:10.272403   26375 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 14:02:10.272483   26375 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 14:02:10.272561   26375 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 14:02:10.272647   26375 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 14:02:10.272757   26375 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 14:02:10.272836   26375 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 14:02:10.273089   26375 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 14:02:10.273266   26375 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 14:02:10.273322   26375 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 14:02:10.273402   26375 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 14:02:10.276190   26375 out.go:204]   - Booting up control plane ...
	I0610 14:02:10.276312   26375 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 14:02:10.276389   26375 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 14:02:10.276443   26375 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 14:02:10.276513   26375 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 14:02:10.276645   26375 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 14:02:10.276711   26375 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.501933 seconds
	I0610 14:02:10.276798   26375 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 14:02:10.276897   26375 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 14:02:10.276952   26375 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 14:02:10.277139   26375 kubeadm.go:322] [mark-control-plane] Marking the node addons-060929 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 14:02:10.277197   26375 kubeadm.go:322] [bootstrap-token] Using token: ln0kaw.es1qelyvygb5zicp
	I0610 14:02:10.279388   26375 out.go:204]   - Configuring RBAC rules ...
	I0610 14:02:10.279479   26375 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 14:02:10.279559   26375 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 14:02:10.279766   26375 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 14:02:10.279932   26375 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 14:02:10.280093   26375 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 14:02:10.280231   26375 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 14:02:10.280335   26375 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 14:02:10.280398   26375 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 14:02:10.280460   26375 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 14:02:10.280473   26375 kubeadm.go:322] 
	I0610 14:02:10.280549   26375 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 14:02:10.280562   26375 kubeadm.go:322] 
	I0610 14:02:10.280651   26375 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 14:02:10.280660   26375 kubeadm.go:322] 
	I0610 14:02:10.280721   26375 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 14:02:10.280893   26375 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 14:02:10.280963   26375 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 14:02:10.280977   26375 kubeadm.go:322] 
	I0610 14:02:10.281051   26375 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 14:02:10.281059   26375 kubeadm.go:322] 
	I0610 14:02:10.281124   26375 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 14:02:10.281135   26375 kubeadm.go:322] 
	I0610 14:02:10.281206   26375 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 14:02:10.281309   26375 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 14:02:10.281365   26375 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 14:02:10.281370   26375 kubeadm.go:322] 
	I0610 14:02:10.281439   26375 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 14:02:10.281524   26375 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 14:02:10.281535   26375 kubeadm.go:322] 
	I0610 14:02:10.281609   26375 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ln0kaw.es1qelyvygb5zicp \
	I0610 14:02:10.281691   26375 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 \
	I0610 14:02:10.281714   26375 kubeadm.go:322] 	--control-plane 
	I0610 14:02:10.281723   26375 kubeadm.go:322] 
	I0610 14:02:10.281795   26375 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 14:02:10.281802   26375 kubeadm.go:322] 
	I0610 14:02:10.281864   26375 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ln0kaw.es1qelyvygb5zicp \
	I0610 14:02:10.281962   26375 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 
	I0610 14:02:10.281972   26375 cni.go:84] Creating CNI manager for ""
	I0610 14:02:10.281977   26375 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0610 14:02:10.283674   26375 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 14:02:10.285037   26375 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 14:02:10.288324   26375 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0610 14:02:10.288338   26375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 14:02:10.302837   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 14:02:10.940688   26375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 14:02:10.940748   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:10.940774   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f minikube.k8s.io/name=addons-060929 minikube.k8s.io/updated_at=2023_06_10T14_02_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:11.003208   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:11.009883   26375 ops.go:34] apiserver oom_adj: -16
	I0610 14:02:11.572494   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:12.071895   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:12.571937   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:13.072478   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:13.572241   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:14.072220   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:14.572188   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:15.072883   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:15.572458   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:16.072623   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:16.572530   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:17.072040   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:17.572094   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:18.072411   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:18.572241   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:19.071890   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:19.572728   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:20.072414   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:20.572521   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:21.071994   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:21.572907   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:22.072313   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:22.572729   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:23.071945   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:23.572957   26375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:02:23.635115   26375 kubeadm.go:1076] duration metric: took 12.694415341s to wait for elevateKubeSystemPrivileges.
	I0610 14:02:23.635147   26375 kubeadm.go:406] StartCluster complete in 21.899017535s
	I0610 14:02:23.635167   26375 settings.go:142] acquiring lock: {Name:mk5881f609c073bbe2e65c237b3cf267f8761582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:23.635276   26375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:02:23.635722   26375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/kubeconfig: {Name:mk5649556a15e88039256d0bd607afdddb4a6ce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:02:23.635912   26375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 14:02:23.635998   26375 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0610 14:02:23.636121   26375 addons.go:66] Setting helm-tiller=true in profile "addons-060929"
	I0610 14:02:23.636133   26375 addons.go:66] Setting storage-provisioner=true in profile "addons-060929"
	I0610 14:02:23.636135   26375 addons.go:66] Setting ingress=true in profile "addons-060929"
	I0610 14:02:23.636141   26375 addons.go:66] Setting cloud-spanner=true in profile "addons-060929"
	I0610 14:02:23.636145   26375 addons.go:228] Setting addon helm-tiller=true in "addons-060929"
	I0610 14:02:23.636145   26375 addons.go:228] Setting addon storage-provisioner=true in "addons-060929"
	I0610 14:02:23.636153   26375 addons.go:228] Setting addon ingress=true in "addons-060929"
	I0610 14:02:23.636159   26375 addons.go:66] Setting ingress-dns=true in profile "addons-060929"
	I0610 14:02:23.636158   26375 addons.go:66] Setting default-storageclass=true in profile "addons-060929"
	I0610 14:02:23.636156   26375 addons.go:66] Setting registry=true in profile "addons-060929"
	I0610 14:02:23.636201   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.636208   26375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-060929"
	I0610 14:02:23.636216   26375 addons.go:228] Setting addon registry=true in "addons-060929"
	I0610 14:02:23.636168   26375 addons.go:228] Setting addon ingress-dns=true in "addons-060929"
	I0610 14:02:23.636136   26375 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-060929"
	I0610 14:02:23.636323   26375 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-060929"
	I0610 14:02:23.636153   26375 addons.go:228] Setting addon cloud-spanner=true in "addons-060929"
	I0610 14:02:23.636372   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.636384   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.636201   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.636128   26375 addons.go:66] Setting metrics-server=true in profile "addons-060929"
	I0610 14:02:23.636487   26375 addons.go:228] Setting addon metrics-server=true in "addons-060929"
	I0610 14:02:23.636523   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.636555   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.636728   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.637123   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.637318   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.637340   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.637372   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.636126   26375 addons.go:66] Setting inspektor-gadget=true in profile "addons-060929"
	I0610 14:02:23.636184   26375 addons.go:66] Setting gcp-auth=true in profile "addons-060929"
	I0610 14:02:23.636121   26375 addons.go:66] Setting volumesnapshots=true in profile "addons-060929"
	I0610 14:02:23.637961   26375 addons.go:228] Setting addon volumesnapshots=true in "addons-060929"
	I0610 14:02:23.638001   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.636205   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.638598   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.639122   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.636264   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.639321   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.636126   26375 config.go:182] Loaded profile config "addons-060929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:02:23.637805   26375 addons.go:228] Setting addon inspektor-gadget=true in "addons-060929"
	I0610 14:02:23.639871   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.637868   26375 mustload.go:65] Loading cluster: addons-060929
	I0610 14:02:23.642049   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.647879   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.648106   26375 config.go:182] Loaded profile config "addons-060929": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:02:23.648341   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.648759   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.672487   26375 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0610 14:02:23.674861   26375 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 14:02:23.675011   26375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 14:02:23.675128   26375 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0610 14:02:23.676539   26375 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 14:02:23.681641   26375 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 14:02:23.681662   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 14:02:23.681706   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.676965   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0610 14:02:23.681823   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.674898   26375 out.go:177]   - Using image docker.io/registry:2.8.1
	I0610 14:02:23.683583   26375 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.6
	I0610 14:02:23.683592   26375 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 14:02:23.685141   26375 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0610 14:02:23.685156   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0610 14:02:23.685197   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.683586   26375 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0610 14:02:23.686808   26375 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0610 14:02:23.688471   26375 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 14:02:23.693671   26375 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0610 14:02:23.693685   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0610 14:02:23.693727   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.695406   26375 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0610 14:02:23.691817   26375 addons.go:228] Setting addon default-storageclass=true in "addons-060929"
	I0610 14:02:23.691830   26375 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.0
	I0610 14:02:23.691885   26375 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0610 14:02:23.702855   26375 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 14:02:23.704456   26375 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 14:02:23.705939   26375 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 14:02:23.707441   26375 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 14:02:23.708825   26375 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 14:02:23.708843   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 14:02:23.708898   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.696991   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.697034   26375 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 14:02:23.713507   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 14:02:23.713562   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.697052   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0610 14:02:23.713922   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:23.715351   26375 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 14:02:23.715434   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.717236   26375 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 14:02:23.718940   26375 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 14:02:23.718956   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0610 14:02:23.718996   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.732864   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:23.735969   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.746785   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.747342   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.751613   26375 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.17.0
	I0610 14:02:23.753093   26375 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0610 14:02:23.753106   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0610 14:02:23.752264   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.753143   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.754830   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.756981   26375 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 14:02:23.757004   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 14:02:23.757051   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.777972   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.782356   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.789222   26375 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 14:02:23.790879   26375 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 14:02:23.790897   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 14:02:23.790947   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:23.793340   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.795958   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.802189   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.814886   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:23.862480   26375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 14:02:23.961173   26375 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0610 14:02:23.961197   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0610 14:02:24.060962   26375 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 14:02:24.061040   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0610 14:02:24.074100   26375 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0610 14:02:24.074176   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0610 14:02:24.082149   26375 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0610 14:02:24.082229   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0610 14:02:24.160421   26375 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 14:02:24.160457   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 14:02:24.161329   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 14:02:24.162311   26375 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0610 14:02:24.162329   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0610 14:02:24.163001   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 14:02:24.172046   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 14:02:24.176962   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 14:02:24.179216   26375 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 14:02:24.179279   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 14:02:24.265077   26375 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0610 14:02:24.265150   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0610 14:02:24.266654   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 14:02:24.274862   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0610 14:02:24.275767   26375 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-060929" context rescaled to 1 replicas
	I0610 14:02:24.275839   26375 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 14:02:24.278191   26375 out.go:177] * Verifying Kubernetes components...
	I0610 14:02:24.276560   26375 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0610 14:02:24.279952   26375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:02:24.279958   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0610 14:02:24.282184   26375 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0610 14:02:24.282227   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0610 14:02:24.368310   26375 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 14:02:24.368391   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 14:02:24.371248   26375 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 14:02:24.371271   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 14:02:24.465025   26375 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0610 14:02:24.465053   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0610 14:02:24.481607   26375 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0610 14:02:24.481635   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0610 14:02:24.561899   26375 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0610 14:02:24.561980   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0610 14:02:24.765270   26375 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 14:02:24.765346   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 14:02:24.767087   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 14:02:24.860342   26375 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0610 14:02:24.860367   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0610 14:02:24.861603   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0610 14:02:24.862494   26375 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0610 14:02:24.862516   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0610 14:02:24.961393   26375 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 14:02:24.961475   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0610 14:02:25.065191   26375 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 14:02:25.065219   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 14:02:25.271069   26375 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 14:02:25.271095   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 14:02:25.362791   26375 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0610 14:02:25.362814   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0610 14:02:25.461326   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 14:02:25.560630   26375 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.698109596s)
	I0610 14:02:25.560664   26375 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
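The sed pipeline that just completed rewrites the CoreDNS ConfigMap so that `host.minikube.internal` resolves to the host gateway (192.168.49.1 on this run). Reconstructed directly from the command's insert text, the injected stanza in the Corefile looks roughly like:

```
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
```

The `fallthrough` directive lets lookups for any other name continue to the remaining CoreDNS plugins (here, `forward . /etc/resolv.conf`).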
	I0610 14:02:25.565829   26375 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0610 14:02:25.565850   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0610 14:02:25.580053   26375 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 14:02:25.580076   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 14:02:25.972396   26375 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 14:02:25.972423   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 14:02:25.975018   26375 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 14:02:25.975042   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0610 14:02:26.081116   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 14:02:26.381308   26375 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 14:02:26.381402   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 14:02:26.761367   26375 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 14:02:26.761442   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 14:02:26.878012   26375 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 14:02:26.878082   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 14:02:27.065193   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 14:02:27.969865   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.808498709s)
	I0610 14:02:27.969956   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.806933362s)
	I0610 14:02:27.970015   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.797935912s)
	I0610 14:02:29.575374   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.398319655s)
	I0610 14:02:29.575410   26375 addons.go:464] Verifying addon ingress=true in "addons-060929"
	I0610 14:02:29.575440   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.308726723s)
	I0610 14:02:29.577249   26375 out.go:177] * Verifying ingress addon...
	I0610 14:02:29.575492   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.300597065s)
	I0610 14:02:29.575506   26375 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.295493125s)
	I0610 14:02:29.575575   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.808459799s)
	I0610 14:02:29.575637   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.714000592s)
	I0610 14:02:29.575727   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.114367416s)
	I0610 14:02:29.575796   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.494635666s)
	I0610 14:02:29.578805   26375 addons.go:464] Verifying addon metrics-server=true in "addons-060929"
	I0610 14:02:29.578811   26375 addons.go:464] Verifying addon registry=true in "addons-060929"
	I0610 14:02:29.580544   26375 out.go:177] * Verifying registry addon...
	W0610 14:02:29.578851   26375 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0610 14:02:29.579440   26375 node_ready.go:35] waiting up to 6m0s for node "addons-060929" to be "Ready" ...
	I0610 14:02:29.579485   26375 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 14:02:29.581980   26375 retry.go:31] will retry after 143.091538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
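The failure above is a CRD registration race: the VolumeSnapshot CRDs and a `VolumeSnapshotClass` object are submitted in the same `kubectl apply` invocation, so the class is rejected before the new API group becomes discoverable. minikube recovers by retrying the apply, as the log shows. A hypothetical two-phase alternative, sketched here only for illustration (filenames abbreviated from the paths in the log), would wait for the CRD to be established before applying objects that depend on it:

```shell
# Sketch only: apply the CRD first, wait until the API server reports it
# as Established, then apply the dependent VolumeSnapshotClass object.
kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl wait --for condition=established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io
kubectl apply -f csi-hostpath-snapshotclass.yaml
```

Retrying, as minikube does here, is the simpler strategy and converges to the same state once the CRDs are established.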
	I0610 14:02:29.582734   26375 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0610 14:02:29.586300   26375 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 14:02:29.586315   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:29.586490   26375 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 14:02:29.586504   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:29.726064   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 14:02:30.090290   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:30.090484   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:30.268276   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.202982006s)
	I0610 14:02:30.268359   26375 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-060929"
	I0610 14:02:30.270413   26375 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 14:02:30.272861   26375 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 14:02:30.278168   26375 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 14:02:30.278192   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:30.565030   26375 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 14:02:30.565151   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:30.600881   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:30.664872   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:30.665149   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:30.862638   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:31.061609   26375 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 14:02:31.164820   26375 addons.go:228] Setting addon gcp-auth=true in "addons-060929"
	I0610 14:02:31.164871   26375 host.go:66] Checking if "addons-060929" exists ...
	I0610 14:02:31.165344   26375 cli_runner.go:164] Run: docker container inspect addons-060929 --format={{.State.Status}}
	I0610 14:02:31.166610   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:31.166867   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:31.193617   26375 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 14:02:31.193671   26375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-060929
	I0610 14:02:31.208152   26375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/addons-060929/id_rsa Username:docker}
	I0610 14:02:31.374507   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:31.669441   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:31.671149   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:31.672325   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:31.782957   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:32.165507   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:32.168288   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:32.287384   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:32.569277   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.843164242s)
	I0610 14:02:32.569417   26375 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.375767242s)
	I0610 14:02:32.571380   26375 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0610 14:02:32.573057   26375 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0610 14:02:32.574540   26375 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 14:02:32.574558   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 14:02:32.662362   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:32.663562   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:32.681622   26375 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 14:02:32.681645   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 14:02:32.779769   26375 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 14:02:32.779793   26375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0610 14:02:32.783756   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:32.873056   26375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 14:02:33.169755   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:33.170963   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:33.283497   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:33.664414   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:33.664760   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:33.782753   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:34.089191   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:34.162636   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:34.163008   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:34.283254   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:34.670117   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:34.670887   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:34.783598   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:35.183232   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:35.184963   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:35.365693   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:35.593102   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:35.662124   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:35.783581   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:36.090319   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:36.090500   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:36.091667   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:36.283340   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:36.579276   26375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.706091509s)
	I0610 14:02:36.580166   26375 addons.go:464] Verifying addon gcp-auth=true in "addons-060929"
	I0610 14:02:36.581942   26375 out.go:177] * Verifying gcp-auth addon...
	I0610 14:02:36.584405   26375 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 14:02:36.587076   26375 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 14:02:36.587097   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:36.589543   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:36.589572   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:36.782525   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:37.090009   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:37.090053   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:37.090195   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:37.282836   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:37.590059   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:37.590193   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:37.590451   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:37.782979   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:38.090076   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:38.090632   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:38.090784   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:38.282724   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:38.587820   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:38.589789   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:38.589872   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:38.592013   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:38.782444   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:39.090258   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:39.090607   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:39.093280   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:39.282719   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:39.589956   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:39.589976   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:39.590151   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:39.782615   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:40.090306   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:40.090307   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:40.090527   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:40.282658   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:40.588196   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:40.590265   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:40.590631   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:40.590792   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:40.782507   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:41.091971   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:41.092745   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:41.093007   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:41.282126   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:41.590839   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:41.590879   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:41.591217   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:41.782816   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:42.090583   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:42.091186   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:42.091186   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:42.282912   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:42.589868   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:42.589886   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:42.590152   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:42.782762   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:43.088258   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:43.090096   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:43.090217   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:43.090799   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:43.282510   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:43.589930   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:43.590126   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:43.590392   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:43.782419   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:44.090366   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:44.090907   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:44.091067   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:44.282922   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:44.590317   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:44.590345   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:44.590803   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:44.781559   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:45.088288   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:45.089993   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:45.090069   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:45.090149   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:45.282421   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:45.589774   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:45.589815   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:45.589822   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:45.782278   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:46.090055   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:46.090251   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:46.090364   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:46.281890   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:46.589643   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:46.589817   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:46.590021   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:46.782227   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:47.090291   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:47.090575   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:47.090636   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:47.281896   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:47.587344   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:47.590058   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:47.590321   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:47.590337   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:47.781548   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:48.089870   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:48.089871   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:48.090102   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:48.282450   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:48.591099   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:48.591137   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:48.591420   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:48.782566   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:49.089809   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:49.089949   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:49.089989   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:49.282376   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:49.587917   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:49.589846   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:49.589846   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:49.590004   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:49.782351   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:50.090134   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:50.090430   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:50.090591   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:50.281454   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:50.589815   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:50.589991   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:50.590011   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:50.782299   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:51.089875   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:51.090388   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:51.090482   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:51.281670   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:51.589580   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:51.589917   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:51.589986   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:51.782427   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:52.088385   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:52.091083   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:52.091137   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:52.091484   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:52.282565   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:52.589504   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:52.589758   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:52.589829   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:52.782230   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:53.089940   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:53.090280   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:53.090365   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:53.281685   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:53.589744   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:53.589918   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:53.590219   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:53.782329   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:54.089860   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:54.089891   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:54.089970   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:54.282475   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:54.588030   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:54.589857   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:54.589911   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:54.590091   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:54.783589   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:55.089901   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:55.090014   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:55.090116   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:55.282442   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:55.590349   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:55.590726   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:55.591056   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:55.781472   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:56.089593   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:56.089736   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:56.089849   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:56.282161   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:56.589862   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:56.590088   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:56.590426   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:56.782666   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:57.088118   26375 node_ready.go:58] node "addons-060929" has status "Ready":"False"
	I0610 14:02:57.089826   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:57.089989   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:57.090028   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:57.283155   26375 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 14:02:57.283174   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:57.588173   26375 node_ready.go:49] node "addons-060929" has status "Ready":"True"
	I0610 14:02:57.588197   26375 node_ready.go:38] duration metric: took 28.006213883s waiting for node "addons-060929" to be "Ready" ...
	I0610 14:02:57.588206   26375 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 14:02:57.590972   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:57.591607   26375 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 14:02:57.591625   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:57.592426   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:57.597055   26375 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-5ctmr" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:57.783270   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:58.093786   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:58.096102   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:58.096676   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:58.285859   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:58.591488   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:58.591568   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:58.591570   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:58.665230   26375 pod_ready.go:92] pod "coredns-5d78c9869d-5ctmr" in "kube-system" namespace has status "Ready":"True"
	I0610 14:02:58.665322   26375 pod_ready.go:81] duration metric: took 1.068233509s waiting for pod "coredns-5d78c9869d-5ctmr" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:58.665368   26375 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-060929" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:58.670064   26375 pod_ready.go:92] pod "etcd-addons-060929" in "kube-system" namespace has status "Ready":"True"
	I0610 14:02:58.670084   26375 pod_ready.go:81] duration metric: took 4.700239ms waiting for pod "etcd-addons-060929" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:58.670097   26375 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-060929" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:58.674459   26375 pod_ready.go:92] pod "kube-apiserver-addons-060929" in "kube-system" namespace has status "Ready":"True"
	I0610 14:02:58.674476   26375 pod_ready.go:81] duration metric: took 4.371789ms waiting for pod "kube-apiserver-addons-060929" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:58.674487   26375 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-060929" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:58.679412   26375 pod_ready.go:92] pod "kube-controller-manager-addons-060929" in "kube-system" namespace has status "Ready":"True"
	I0610 14:02:58.679430   26375 pod_ready.go:81] duration metric: took 4.932151ms waiting for pod "kube-controller-manager-addons-060929" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:58.679443   26375 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cf52j" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:58.783718   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:58.789090   26375 pod_ready.go:92] pod "kube-proxy-cf52j" in "kube-system" namespace has status "Ready":"True"
	I0610 14:02:58.789112   26375 pod_ready.go:81] duration metric: took 109.658058ms waiting for pod "kube-proxy-cf52j" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:58.789123   26375 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-060929" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:59.090984   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:59.091429   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:59.091596   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:59.189155   26375 pod_ready.go:92] pod "kube-scheduler-addons-060929" in "kube-system" namespace has status "Ready":"True"
	I0610 14:02:59.189175   26375 pod_ready.go:81] duration metric: took 400.043862ms waiting for pod "kube-scheduler-addons-060929" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:59.189184   26375 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace to be "Ready" ...
	I0610 14:02:59.284097   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:02:59.591069   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:02:59.592321   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:02:59.592347   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:02:59.783408   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:00.090701   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:00.091465   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:00.091705   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:00.284069   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:00.591147   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:00.592020   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:00.592668   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:00.783363   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:01.091911   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:01.092704   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:01.093168   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:01.282738   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:01.590783   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:01.591834   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:01.591851   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:01.595305   26375 pod_ready.go:102] pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace has status "Ready":"False"
	I0610 14:03:01.784183   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:02.090517   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:02.090596   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:02.090698   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:02.284006   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:02.591153   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:02.591763   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:02.591965   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:02.783152   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:03.090889   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:03.091541   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:03.091746   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:03.283980   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:03.590443   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:03.591121   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:03.591286   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:03.784386   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:04.090611   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:04.091241   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:04.091423   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:04.095201   26375 pod_ready.go:102] pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace has status "Ready":"False"
	I0610 14:03:04.283722   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:04.590419   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:04.590895   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:04.591077   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:04.783585   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:05.090710   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:05.091324   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:05.091399   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:05.284042   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:05.590872   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:05.591370   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:05.591385   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:05.783541   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:06.090591   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:06.090907   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:06.090932   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:06.283999   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:06.590665   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:06.591169   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:06.591267   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:06.594415   26375 pod_ready.go:102] pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace has status "Ready":"False"
	I0610 14:03:06.783131   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:07.090221   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:07.091165   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:07.091183   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:07.284528   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:07.673139   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:07.674365   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:07.674736   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:07.783830   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:08.091478   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:08.091703   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:08.092135   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:08.283880   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:08.591511   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:08.592255   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:08.592520   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:08.597099   26375 pod_ready.go:102] pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace has status "Ready":"False"
	I0610 14:03:08.783360   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:09.092812   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:09.093171   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:09.093524   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:09.283127   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:09.590465   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:09.591238   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:09.591287   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:09.784179   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:10.090280   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:10.091126   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:10.091195   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:10.285095   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:10.591810   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:10.592481   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:10.592503   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:10.784631   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:11.090723   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:11.091214   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:11.091359   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:11.094355   26375 pod_ready.go:102] pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace has status "Ready":"False"
	I0610 14:03:11.282935   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:11.591134   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:11.591688   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:11.591945   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:11.784603   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:12.128673   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:12.129692   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:12.130916   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:12.283415   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:12.592955   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:12.593669   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:12.593734   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:12.783318   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:13.110523   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:13.111084   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:13.111435   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:13.112387   26375 pod_ready.go:102] pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace has status "Ready":"False"
	I0610 14:03:13.283012   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:13.590636   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:13.591194   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:13.591251   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:13.783184   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:14.090945   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:14.092642   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 14:03:14.093062   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:14.283449   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:14.589995   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:14.591115   26375 kapi.go:107] duration metric: took 45.008380189s to wait for kubernetes.io/minikube-addons=registry ...
	I0610 14:03:14.591314   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:14.784653   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:15.090614   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:15.090656   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:15.283841   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:15.590574   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:15.591054   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:15.595304   26375 pod_ready.go:102] pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace has status "Ready":"False"
	I0610 14:03:15.783702   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:16.090436   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:16.091022   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:16.282773   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:16.590790   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:16.593362   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:16.597550   26375 pod_ready.go:92] pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace has status "Ready":"True"
	I0610 14:03:16.597570   26375 pod_ready.go:81] duration metric: took 17.408380503s waiting for pod "metrics-server-844d8db974-srvl6" in "kube-system" namespace to be "Ready" ...
	I0610 14:03:16.597594   26375 pod_ready.go:38] duration metric: took 19.009358484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 14:03:16.597617   26375 api_server.go:52] waiting for apiserver process to appear ...
	I0610 14:03:16.597664   26375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 14:03:16.609825   26375 api_server.go:72] duration metric: took 52.333939681s to wait for apiserver process to appear ...
	I0610 14:03:16.609846   26375 api_server.go:88] waiting for apiserver healthz status ...
	I0610 14:03:16.609864   26375 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0610 14:03:16.615634   26375 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0610 14:03:16.616576   26375 api_server.go:141] control plane version: v1.27.2
	I0610 14:03:16.616600   26375 api_server.go:131] duration metric: took 6.74656ms to wait for apiserver health ...
	I0610 14:03:16.616608   26375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 14:03:16.625711   26375 system_pods.go:59] 18 kube-system pods found
	I0610 14:03:16.625737   26375 system_pods.go:61] "coredns-5d78c9869d-5ctmr" [be7a9863-6626-4f8c-82a5-4816e06ade3d] Running
	I0610 14:03:16.625747   26375 system_pods.go:61] "csi-hostpath-attacher-0" [aeb63610-44e1-4b61-ba60-e7493cd9af3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 14:03:16.625757   26375 system_pods.go:61] "csi-hostpath-resizer-0" [e6eac0be-aeee-42fb-834b-9f62b2f5a108] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 14:03:16.625768   26375 system_pods.go:61] "csi-hostpathplugin-9zmgg" [601c184c-4e4b-48a7-8734-a7287391b531] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 14:03:16.625776   26375 system_pods.go:61] "etcd-addons-060929" [b10bc95e-a46c-4adc-90a6-4cb743026f61] Running
	I0610 14:03:16.625787   26375 system_pods.go:61] "kindnet-x679s" [2ae1d9a1-943e-4f02-9251-1e267cee4d0e] Running
	I0610 14:03:16.625795   26375 system_pods.go:61] "kube-apiserver-addons-060929" [447fbbb3-e7e0-42fd-8a15-dcedce2514e8] Running
	I0610 14:03:16.625815   26375 system_pods.go:61] "kube-controller-manager-addons-060929" [6a7f0dce-a5b8-4a02-9cfa-9a90e82b3256] Running
	I0610 14:03:16.625822   26375 system_pods.go:61] "kube-ingress-dns-minikube" [2c77456e-199d-463d-afda-a83abf1d090f] Running
	I0610 14:03:16.625833   26375 system_pods.go:61] "kube-proxy-cf52j" [bb95cd4d-82ce-41a4-b8fc-ce3144d37cf2] Running
	I0610 14:03:16.625842   26375 system_pods.go:61] "kube-scheduler-addons-060929" [b79c7756-670f-4cb9-837b-7084c90b50d9] Running
	I0610 14:03:16.625852   26375 system_pods.go:61] "metrics-server-844d8db974-srvl6" [ac754de1-7b0e-4135-8476-229f6fab6e23] Running
	I0610 14:03:16.625859   26375 system_pods.go:61] "registry-d9t4w" [e1c46cbb-c6f4-4d60-b97f-8ca3827c1901] Running
	I0610 14:03:16.625870   26375 system_pods.go:61] "registry-proxy-nnzpl" [02e1f946-92ca-4b6a-b967-b7f201fbbfd3] Running
	I0610 14:03:16.625882   26375 system_pods.go:61] "snapshot-controller-75bbb956b9-7vq9b" [a01a9a34-f096-4dc4-a984-3023b7779531] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 14:03:16.625893   26375 system_pods.go:61] "snapshot-controller-75bbb956b9-ff55s" [f3b7b05e-2394-4669-9da0-72efb734e55d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 14:03:16.625905   26375 system_pods.go:61] "storage-provisioner" [91e75303-d59a-48f7-af80-749302e04bf2] Running
	I0610 14:03:16.625916   26375 system_pods.go:61] "tiller-deploy-6847666dc-c5kmk" [53bf6ef3-aa47-441c-84eb-b4b76668ae56] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0610 14:03:16.625926   26375 system_pods.go:74] duration metric: took 9.310937ms to wait for pod list to return data ...
	I0610 14:03:16.625936   26375 default_sa.go:34] waiting for default service account to be created ...
	I0610 14:03:16.671138   26375 default_sa.go:45] found service account: "default"
	I0610 14:03:16.671161   26375 default_sa.go:55] duration metric: took 45.214994ms for default service account to be created ...
	I0610 14:03:16.671170   26375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 14:03:16.682849   26375 system_pods.go:86] 18 kube-system pods found
	I0610 14:03:16.682879   26375 system_pods.go:89] "coredns-5d78c9869d-5ctmr" [be7a9863-6626-4f8c-82a5-4816e06ade3d] Running
	I0610 14:03:16.682893   26375 system_pods.go:89] "csi-hostpath-attacher-0" [aeb63610-44e1-4b61-ba60-e7493cd9af3e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 14:03:16.682906   26375 system_pods.go:89] "csi-hostpath-resizer-0" [e6eac0be-aeee-42fb-834b-9f62b2f5a108] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0610 14:03:16.682918   26375 system_pods.go:89] "csi-hostpathplugin-9zmgg" [601c184c-4e4b-48a7-8734-a7287391b531] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 14:03:16.682930   26375 system_pods.go:89] "etcd-addons-060929" [b10bc95e-a46c-4adc-90a6-4cb743026f61] Running
	I0610 14:03:16.682938   26375 system_pods.go:89] "kindnet-x679s" [2ae1d9a1-943e-4f02-9251-1e267cee4d0e] Running
	I0610 14:03:16.682955   26375 system_pods.go:89] "kube-apiserver-addons-060929" [447fbbb3-e7e0-42fd-8a15-dcedce2514e8] Running
	I0610 14:03:16.682964   26375 system_pods.go:89] "kube-controller-manager-addons-060929" [6a7f0dce-a5b8-4a02-9cfa-9a90e82b3256] Running
	I0610 14:03:16.682977   26375 system_pods.go:89] "kube-ingress-dns-minikube" [2c77456e-199d-463d-afda-a83abf1d090f] Running
	I0610 14:03:16.682984   26375 system_pods.go:89] "kube-proxy-cf52j" [bb95cd4d-82ce-41a4-b8fc-ce3144d37cf2] Running
	I0610 14:03:16.682990   26375 system_pods.go:89] "kube-scheduler-addons-060929" [b79c7756-670f-4cb9-837b-7084c90b50d9] Running
	I0610 14:03:16.682998   26375 system_pods.go:89] "metrics-server-844d8db974-srvl6" [ac754de1-7b0e-4135-8476-229f6fab6e23] Running
	I0610 14:03:16.683007   26375 system_pods.go:89] "registry-d9t4w" [e1c46cbb-c6f4-4d60-b97f-8ca3827c1901] Running
	I0610 14:03:16.683015   26375 system_pods.go:89] "registry-proxy-nnzpl" [02e1f946-92ca-4b6a-b967-b7f201fbbfd3] Running
	I0610 14:03:16.683027   26375 system_pods.go:89] "snapshot-controller-75bbb956b9-7vq9b" [a01a9a34-f096-4dc4-a984-3023b7779531] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 14:03:16.683040   26375 system_pods.go:89] "snapshot-controller-75bbb956b9-ff55s" [f3b7b05e-2394-4669-9da0-72efb734e55d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 14:03:16.683053   26375 system_pods.go:89] "storage-provisioner" [91e75303-d59a-48f7-af80-749302e04bf2] Running
	I0610 14:03:16.683063   26375 system_pods.go:89] "tiller-deploy-6847666dc-c5kmk" [53bf6ef3-aa47-441c-84eb-b4b76668ae56] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0610 14:03:16.683098   26375 system_pods.go:126] duration metric: took 11.921356ms to wait for k8s-apps to be running ...
	I0610 14:03:16.683112   26375 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 14:03:16.683157   26375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:03:16.764281   26375 system_svc.go:56] duration metric: took 81.164114ms WaitForService to wait for kubelet.
	I0610 14:03:16.764303   26375 kubeadm.go:581] duration metric: took 52.488424211s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 14:03:16.764322   26375 node_conditions.go:102] verifying NodePressure condition ...
	I0610 14:03:16.767176   26375 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0610 14:03:16.767195   26375 node_conditions.go:123] node cpu capacity is 8
	I0610 14:03:16.767207   26375 node_conditions.go:105] duration metric: took 2.880948ms to run NodePressure ...
	I0610 14:03:16.767216   26375 start.go:228] waiting for startup goroutines ...
	I0610 14:03:16.785115   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:17.090124   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:17.090670   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:17.283775   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:17.590585   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:17.591286   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:17.783314   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:18.090481   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:18.090901   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:18.282713   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:18.665246   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:18.665712   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:18.784313   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:19.090771   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:19.091519   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:19.284471   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:19.589946   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:19.590478   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:19.783821   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:20.090904   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:20.091084   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:20.283560   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:20.592339   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:20.592733   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:20.784313   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:21.090252   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:21.090930   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:21.283496   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:21.590393   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:21.591237   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:21.784135   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:22.090555   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:22.091263   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:22.282650   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:22.590150   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:22.592300   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:22.785948   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:23.090143   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:23.090697   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:23.283867   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:23.590518   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:23.591109   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:23.783431   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:24.091303   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:24.091655   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:24.284122   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:24.590694   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 14:03:24.591292   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:24.783333   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:25.090026   26375 kapi.go:107] duration metric: took 48.505619945s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0610 14:03:25.092379   26375 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-060929 cluster.
	I0610 14:03:25.090086   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:25.095817   26375 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 14:03:25.097419   26375 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0610 14:03:25.283785   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:25.590583   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:25.783293   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:26.090225   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:26.285209   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:26.590908   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:26.784022   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:27.091134   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:27.282795   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:27.590978   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:27.782724   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:28.090634   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:28.283063   26375 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 14:03:28.590381   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:28.782828   26375 kapi.go:107] duration metric: took 58.509967652s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 14:03:29.090440   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:29.590800   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:30.166912   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:30.591488   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:31.164720   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:31.591305   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:32.090376   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:32.591537   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:33.090824   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:33.590672   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:34.091427   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:34.595574   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:35.091293   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:35.590425   26375 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 14:03:36.090338   26375 kapi.go:107] duration metric: took 1m6.510850424s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0610 14:03:36.092596   26375 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, ingress-dns, inspektor-gadget, helm-tiller, metrics-server, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0610 14:03:36.094350   26375 addons.go:499] enable addons completed in 1m12.458355926s: enabled=[storage-provisioner cloud-spanner default-storageclass ingress-dns inspektor-gadget helm-tiller metrics-server volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0610 14:03:36.094387   26375 start.go:233] waiting for cluster config update ...
	I0610 14:03:36.094403   26375 start.go:242] writing updated cluster config ...
	I0610 14:03:36.094637   26375 ssh_runner.go:195] Run: rm -f paused
	I0610 14:03:36.140261   26375 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0610 14:03:36.142586   26375 out.go:177] * Done! kubectl is now configured to use "addons-060929" cluster and "default" namespace by default
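The long runs of `kapi.go:96` "waiting for pod" lines above, each closed out by a `kapi.go:107` "duration metric: took …" line, record a simple poll-until-ready loop over a label selector. As a rough sketch of that pattern (hypothetical illustration, not minikube's actual `kapi.go` code; `check` stands in for "are all pods matching the selector Ready?"):

```python
import time

def wait_for_ready(check, timeout=60.0, interval=0.5):
    """Poll check() until it returns True or the timeout elapses.

    Mirrors the shape of the log above: each failed poll corresponds to
    one 'waiting for pod ... current state: Pending' line, and the
    return value corresponds to the final 'duration metric: took ...'
    line. Raises TimeoutError if the condition is never met.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if check():
            return time.monotonic() - start  # elapsed seconds, like the duration metric
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Usage: a fake readiness check that succeeds on the third poll.
state = {"polls": 0}
def fake_ready():
    state["polls"] += 1
    return state["polls"] >= 3

elapsed = wait_for_ready(fake_ready, timeout=5.0, interval=0.01)
```

In the real test, the equivalent check is a Kubernetes list call filtered by a selector such as `app.kubernetes.io/name=ingress-nginx`, which is why several independent waiters (registry, gcp-auth, csi-hostpath-driver, ingress-nginx) interleave their log lines.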
	
	* 
	* ==> CRI-O <==
	* Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.076994314Z" level=info msg="Removing container: a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17" id=0cef3dc3-4fd4-4528-9ba0-71646e62c8b7 name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.096710248Z" level=info msg="Removed container a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=0cef3dc3-4fd4-4528-9ba0-71646e62c8b7 name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.542285168Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea" id=b1bd4bf4-140b-4901-8f1b-46328370c62c name=/runtime.v1.ImageService/PullImage
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.543080612Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=44495f07-b2a3-481c-97f2-5be0c04a0fb3 name=/runtime.v1.ImageService/ImageStatus
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.543866533Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=44495f07-b2a3-481c-97f2-5be0c04a0fb3 name=/runtime.v1.ImageService/ImageStatus
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.544711964Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-rjfvj/hello-world-app" id=f178dc9d-6733-44fa-b327-662be62aa774 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.544814789Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.618883320Z" level=info msg="Created container 0f7b4c012a57a975d54a231a829ee118c666fc67d86f8062b6cc985bad1335c5: default/hello-world-app-65bdb79f98-rjfvj/hello-world-app" id=f178dc9d-6733-44fa-b327-662be62aa774 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.619429809Z" level=info msg="Starting container: 0f7b4c012a57a975d54a231a829ee118c666fc67d86f8062b6cc985bad1335c5" id=29f9aac9-d9f5-4670-969a-8895330f7983 name=/runtime.v1.RuntimeService/StartContainer
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.630351585Z" level=info msg="Started container" PID=9401 containerID=0f7b4c012a57a975d54a231a829ee118c666fc67d86f8062b6cc985bad1335c5 description=default/hello-world-app-65bdb79f98-rjfvj/hello-world-app id=29f9aac9-d9f5-4670-969a-8895330f7983 name=/runtime.v1.RuntimeService/StartContainer sandboxID=36ed711ab168ec496cf67c15a5437101e98a034b1f9a416226a233b756244908
	Jun 10 14:06:11 addons-060929 crio[950]: time="2023-06-10 14:06:11.765704239Z" level=info msg="Stopping container: 639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470 (timeout: 1s)" id=6ded0d47-98ea-43fb-aa5b-73c0316b09cb name=/runtime.v1.RuntimeService/StopContainer
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.775870122Z" level=warning msg="Stopping container 639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470 with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=6ded0d47-98ea-43fb-aa5b-73c0316b09cb name=/runtime.v1.RuntimeService/StopContainer
	Jun 10 14:06:12 addons-060929 conmon[6125]: conmon 639aa78acef1e8434385 <ninfo>: container 6137 exited with status 137
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.920412488Z" level=info msg="Stopped container 639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470: ingress-nginx/ingress-nginx-controller-7b4698b8c7-wf52k/controller" id=6ded0d47-98ea-43fb-aa5b-73c0316b09cb name=/runtime.v1.RuntimeService/StopContainer
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.920974713Z" level=info msg="Stopping pod sandbox: 070930c56d2038f6f4be31d2f256e8e31843eb5dd4bcf24625d8bf1843a8ace0" id=aeb67668-fac1-459f-b13b-6bca4b95de0d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.924271030Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-I7OO5D4LWZVMWKON - [0:0]\n:KUBE-HP-MYIL5BYO57GTEGYM - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-MYIL5BYO57GTEGYM\n-X KUBE-HP-I7OO5D4LWZVMWKON\nCOMMIT\n"
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.925611525Z" level=info msg="Closing host port tcp:80"
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.925650169Z" level=info msg="Closing host port tcp:443"
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.926952707Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.926970167Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.927085839Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7b4698b8c7-wf52k Namespace:ingress-nginx ID:070930c56d2038f6f4be31d2f256e8e31843eb5dd4bcf24625d8bf1843a8ace0 UID:b81896b1-2d41-4266-924c-552b7d22c52a NetNS:/var/run/netns/717bc8af-4663-40a5-88cb-dc113922d175 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.927195439Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7b4698b8c7-wf52k from CNI network \"kindnet\" (type=ptp)"
	Jun 10 14:06:12 addons-060929 crio[950]: time="2023-06-10 14:06:12.959499496Z" level=info msg="Stopped pod sandbox: 070930c56d2038f6f4be31d2f256e8e31843eb5dd4bcf24625d8bf1843a8ace0" id=aeb67668-fac1-459f-b13b-6bca4b95de0d name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 10 14:06:13 addons-060929 crio[950]: time="2023-06-10 14:06:13.084026584Z" level=info msg="Removing container: 639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470" id=214b319e-e607-45d0-a75b-70d89f60f557 name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 10 14:06:13 addons-060929 crio[950]: time="2023-06-10 14:06:13.098987767Z" level=info msg="Removed container 639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470: ingress-nginx/ingress-nginx-controller-7b4698b8c7-wf52k/controller" id=214b319e-e607-45d0-a75b-70d89f60f557 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0f7b4c012a57a       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      8 seconds ago       Running             hello-world-app           0                   36ed711ab168e       hello-world-app-65bdb79f98-rjfvj
	d77a1455869c3       docker.io/library/nginx@sha256:0b0af14a00ea0e4fd9b09e77d2b89b71b5c5a97f9aa073553f355415bc34ae33                              2 minutes ago       Running             nginx                     0                   9634170bfe2b5       nginx
	cae937485319a       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                             2 minutes ago       Exited              patch                     3                   23f946d00fd0f       ingress-nginx-admission-patch-pdxvn
	48d8070474cdf       ghcr.io/headlamp-k8s/headlamp@sha256:553bbb3a9a8fa54877d672bd8362248bf63776b684817a7a9a2b39a69acd6846                        2 minutes ago       Running             headlamp                  0                   caad352e50867       headlamp-6b5756787-c42vq
	a4281ccfaec63       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   b9015365304fa       gcp-auth-58478865f7-mh9lf
	c684b63f03137       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   2 minutes ago       Exited              create                    0                   57caeee77adb1       ingress-nginx-admission-create-vnx47
	cd4e652bdf6a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   ddb71dedaf462       storage-provisioner
	b80ad10f29744       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   4c1e4758db8d7       coredns-5d78c9869d-5ctmr
	c9460c846dcdc       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             3 minutes ago       Running             kindnet-cni               0                   caa50fd72c0bb       kindnet-x679s
	a1fe0da46a01b       b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee                                                             3 minutes ago       Running             kube-proxy                0                   507f3957b8246       kube-proxy-cf52j
	9be9b0422908c       c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370                                                             4 minutes ago       Running             kube-apiserver            0                   7a07ae53748c2       kube-apiserver-addons-060929
	dbfb3628b886e       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   a841176fa0a06       etcd-addons-060929
	4976d4285f0a5       ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12                                                             4 minutes ago       Running             kube-controller-manager   0                   febf76cc69dbc       kube-controller-manager-addons-060929
	d45567f6616f0       89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0                                                             4 minutes ago       Running             kube-scheduler            0                   f7c5fe8b119e9       kube-scheduler-addons-060929
	
	* 
	* ==> coredns [b80ad10f29744d8751e2df66ec407a2f1046932996d4051d22c02831ad37d5d9] <==
	* [INFO] 10.244.0.4:57912 - 58703 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072908s
	[INFO] 10.244.0.4:52083 - 44020 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004261587s
	[INFO] 10.244.0.4:52083 - 47347 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004484467s
	[INFO] 10.244.0.4:48831 - 33593 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006726889s
	[INFO] 10.244.0.4:48831 - 39990 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007033284s
	[INFO] 10.244.0.4:39438 - 62789 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005233177s
	[INFO] 10.244.0.4:39438 - 63552 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006244108s
	[INFO] 10.244.0.4:34135 - 40006 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000060354s
	[INFO] 10.244.0.4:34135 - 31812 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008122s
	[INFO] 10.244.0.17:55846 - 30770 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166335s
	[INFO] 10.244.0.17:33339 - 11213 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000224261s
	[INFO] 10.244.0.17:46290 - 1602 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000090105s
	[INFO] 10.244.0.17:36343 - 47830 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113938s
	[INFO] 10.244.0.17:35731 - 7348 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101279s
	[INFO] 10.244.0.17:41876 - 26181 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134413s
	[INFO] 10.244.0.17:47571 - 19258 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005255031s
	[INFO] 10.244.0.17:39813 - 23741 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.011473411s
	[INFO] 10.244.0.17:40266 - 26542 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005944497s
	[INFO] 10.244.0.17:60834 - 63478 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008015431s
	[INFO] 10.244.0.17:55291 - 60848 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005163004s
	[INFO] 10.244.0.17:49078 - 29100 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006553325s
	[INFO] 10.244.0.17:47957 - 33974 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00060292s
	[INFO] 10.244.0.17:41021 - 50517 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000720888s
	[INFO] 10.244.0.20:60824 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000181734s
	[INFO] 10.244.0.20:46989 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141042s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-060929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-060929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f
	                    minikube.k8s.io/name=addons-060929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T14_02_10_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-060929
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 14:02:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-060929
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 14:06:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 14:04:13 +0000   Sat, 10 Jun 2023 14:02:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 14:04:13 +0000   Sat, 10 Jun 2023 14:02:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 14:04:13 +0000   Sat, 10 Jun 2023 14:02:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 14:04:13 +0000   Sat, 10 Jun 2023 14:02:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-060929
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a66268d382c4d7dbd2f2cac99046965
	  System UUID:                8a245377-53bd-460b-943b-71d4359a2a1d
	  Boot ID:                    e810f687-8f99-49aa-a9be-3ee9974bdd8c
	  Kernel Version:             5.15.0-1035-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-rjfvj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gcp-auth                    gcp-auth-58478865f7-mh9lf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  headlamp                    headlamp-6b5756787-c42vq                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 coredns-5d78c9869d-5ctmr                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m56s
	  kube-system                 etcd-addons-060929                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-x679s                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m56s
	  kube-system                 kube-apiserver-addons-060929             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-addons-060929    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-cf52j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-scheduler-addons-060929             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m15s (x8 over 4m15s)  kubelet          Node addons-060929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x8 over 4m15s)  kubelet          Node addons-060929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x8 over 4m15s)  kubelet          Node addons-060929 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet          Node addons-060929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet          Node addons-060929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet          Node addons-060929 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m57s                  node-controller  Node addons-060929 event: Registered Node addons-060929 in Controller
	  Normal  NodeReady                3m22s                  kubelet          Node addons-060929 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.008078] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003327] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000824] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000742] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000688] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000785] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000745] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000731] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001661] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001634] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.123264] kauditd_printk_skb: 34 callbacks suppressed
	[Jun10 14:03] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 53 92 9b e0 01 ca 27 ee f5 e3 17 08 00
	[Jun10 14:04] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 96 53 92 9b e0 01 ca 27 ee f5 e3 17 08 00
	[  +2.015740] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 53 92 9b e0 01 ca 27 ee f5 e3 17 08 00
	[  +4.155549] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 53 92 9b e0 01 ca 27 ee f5 e3 17 08 00
	[  +8.191129] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 53 92 9b e0 01 ca 27 ee f5 e3 17 08 00
	[ +16.126280] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 96 53 92 9b e0 01 ca 27 ee f5 e3 17 08 00
	[Jun10 14:05] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 53 92 9b e0 01 ca 27 ee f5 e3 17 08 00
	
	* 
	* ==> etcd [dbfb3628b886e25364e276e5b089b015bf6f2746cbdb1c6224f45e02390adc8e] <==
	* {"level":"info","ts":"2023-06-10T14:02:05.386Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-060929 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T14:02:05.386Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:02:05.386Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:02:05.386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T14:02:05.386Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T14:02:05.386Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:02:05.386Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:02:05.386Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:02:05.387Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-06-10T14:02:05.387Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T14:02:26.279Z","caller":"traceutil/trace.go:171","msg":"trace[1418498867] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"107.201658ms","start":"2023-06-10T14:02:26.171Z","end":"2023-06-10T14:02:26.278Z","steps":["trace[1418498867] 'process raft request'  (duration: 107.102157ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-10T14:02:26.279Z","caller":"traceutil/trace.go:171","msg":"trace[567179707] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"100.679638ms","start":"2023-06-10T14:02:26.179Z","end":"2023-06-10T14:02:26.279Z","steps":["trace[567179707] 'process raft request'  (duration: 100.355365ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-10T14:02:26.572Z","caller":"traceutil/trace.go:171","msg":"trace[1218931947] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"194.79096ms","start":"2023-06-10T14:02:26.377Z","end":"2023-06-10T14:02:26.572Z","steps":["trace[1218931947] 'process raft request'  (duration: 194.750201ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-10T14:02:26.572Z","caller":"traceutil/trace.go:171","msg":"trace[773069724] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"195.810065ms","start":"2023-06-10T14:02:26.376Z","end":"2023-06-10T14:02:26.572Z","steps":["trace[773069724] 'process raft request'  (duration: 105.502612ms)","trace[773069724] 'compare'  (duration: 89.69364ms)"],"step_count":2}
	{"level":"warn","ts":"2023-06-10T14:02:27.675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.539868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replication-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2023-06-10T14:02:27.676Z","caller":"traceutil/trace.go:171","msg":"trace[1441003569] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replication-controller; range_end:; response_count:1; response_revision:463; }","duration":"105.113334ms","start":"2023-06-10T14:02:27.570Z","end":"2023-06-10T14:02:27.676Z","steps":["trace[1441003569] 'range keys from in-memory index tree'  (duration: 100.48144ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-10T14:02:27.676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.394482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-10T14:02:27.677Z","caller":"traceutil/trace.go:171","msg":"trace[1166057205] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:463; }","duration":"106.478049ms","start":"2023-06-10T14:02:27.570Z","end":"2023-06-10T14:02:27.676Z","steps":["trace[1166057205] 'range keys from in-memory index tree'  (duration: 106.255732ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-10T14:02:27.766Z","caller":"traceutil/trace.go:171","msg":"trace[2048235556] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"195.320388ms","start":"2023-06-10T14:02:27.571Z","end":"2023-06-10T14:02:27.766Z","steps":["trace[2048235556] 'process raft request'  (duration: 189.743719ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-10T14:02:27.862Z","caller":"traceutil/trace.go:171","msg":"trace[1861931656] linearizableReadLoop","detail":"{readStateIndex:474; appliedIndex:472; }","duration":"188.050678ms","start":"2023-06-10T14:02:27.674Z","end":"2023-06-10T14:02:27.862Z","steps":["trace[1861931656] 'read index received'  (duration: 86.644089ms)","trace[1861931656] 'applied index is now lower than readState.Index'  (duration: 101.405795ms)"],"step_count":2}
	{"level":"info","ts":"2023-06-10T14:02:27.863Z","caller":"traceutil/trace.go:171","msg":"trace[1709570894] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"188.454596ms","start":"2023-06-10T14:02:27.674Z","end":"2023-06-10T14:02:27.863Z","steps":["trace[1709570894] 'process raft request'  (duration: 187.951267ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-10T14:02:27.863Z","caller":"traceutil/trace.go:171","msg":"trace[82406328] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"188.463309ms","start":"2023-06-10T14:02:27.674Z","end":"2023-06-10T14:02:27.863Z","steps":["trace[82406328] 'process raft request'  (duration: 187.836155ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-10T14:02:27.865Z","caller":"traceutil/trace.go:171","msg":"trace[1191242410] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"188.79513ms","start":"2023-06-10T14:02:27.675Z","end":"2023-06-10T14:02:27.863Z","steps":["trace[1191242410] 'process raft request'  (duration: 187.847278ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-10T14:02:27.872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.333174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:1 size:4640"}
	{"level":"info","ts":"2023-06-10T14:02:27.874Z","caller":"traceutil/trace.go:171","msg":"trace[1559926213] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:1; response_revision:467; }","duration":"198.67423ms","start":"2023-06-10T14:02:27.674Z","end":"2023-06-10T14:02:27.873Z","steps":["trace[1559926213] 'agreement among raft nodes before linearized reading'  (duration: 188.968831ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [a4281ccfaec633a9f74a93bc69a6d79a9d4f30360ca656b7a38536c95965550e] <==
	* 2023/06/10 14:03:23 GCP Auth Webhook started!
	2023/06/10 14:03:37 Ready to marshal response ...
	2023/06/10 14:03:37 Ready to write response ...
	2023/06/10 14:03:37 Ready to marshal response ...
	2023/06/10 14:03:37 Ready to write response ...
	2023/06/10 14:03:37 Ready to marshal response ...
	2023/06/10 14:03:37 Ready to write response ...
	2023/06/10 14:03:46 Ready to marshal response ...
	2023/06/10 14:03:46 Ready to write response ...
	2023/06/10 14:03:46 Ready to marshal response ...
	2023/06/10 14:03:46 Ready to write response ...
	2023/06/10 14:03:51 Ready to marshal response ...
	2023/06/10 14:03:51 Ready to write response ...
	2023/06/10 14:04:02 Ready to marshal response ...
	2023/06/10 14:04:02 Ready to write response ...
	2023/06/10 14:04:16 Ready to marshal response ...
	2023/06/10 14:04:16 Ready to write response ...
	2023/06/10 14:06:10 Ready to marshal response ...
	2023/06/10 14:06:10 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  14:06:19 up  1:48,  0 users,  load average: 0.46, 0.84, 0.44
	Linux addons-060929 5.15.0-1035-gcp #43~20.04.1-Ubuntu SMP Mon May 22 16:49:11 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [c9460c846dcdc51cd5e8e168d241dd8b40732f1cbd2e7422cdc786f03097d24e] <==
	* I0610 14:04:16.882757       1 main.go:227] handling current node
	I0610 14:04:26.885716       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:04:26.885736       1 main.go:227] handling current node
	I0610 14:04:36.897567       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:04:36.897600       1 main.go:227] handling current node
	I0610 14:04:46.909274       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:04:46.909296       1 main.go:227] handling current node
	I0610 14:04:56.913148       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:04:56.913171       1 main.go:227] handling current node
	I0610 14:05:06.925354       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:05:06.925376       1 main.go:227] handling current node
	I0610 14:05:16.929398       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:05:16.929420       1 main.go:227] handling current node
	I0610 14:05:26.941800       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:05:26.941829       1 main.go:227] handling current node
	I0610 14:05:36.945319       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:05:36.945342       1 main.go:227] handling current node
	I0610 14:05:46.950075       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:05:46.950098       1 main.go:227] handling current node
	I0610 14:05:56.953801       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:05:56.953824       1 main.go:227] handling current node
	I0610 14:06:06.957453       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:06:06.957475       1 main.go:227] handling current node
	I0610 14:06:16.969666       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:06:16.969689       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [9be9b0422908c5d8937380132d937137307f65151518b1a692015a7cbb3ba9e1] <==
	* I0610 14:04:30.988537       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 14:04:30.988582       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 14:04:30.999217       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 14:04:30.999300       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 14:04:31.002037       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 14:04:31.002138       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 14:04:31.004080       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 14:04:31.004118       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 14:04:31.013721       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 14:04:31.013773       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 14:04:31.015532       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 14:04:31.015635       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 14:04:31.024793       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 14:04:31.024832       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 14:04:31.060506       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 14:04:31.060542       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0610 14:04:32.004373       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0610 14:04:32.060849       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0610 14:04:32.073697       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0610 14:05:17.535755       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0610 14:05:17.535784       1 handler_proxy.go:100] no RequestInfo found in the context
	E0610 14:05:17.535815       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 14:05:17.535822       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0610 14:06:10.328505       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.101.90.171]
	
	* 
	* ==> kube-controller-manager [4976d4285f0a5a7a295b7c1a26e7a052cd1b485c2c61441dd7df0132cff0d6d3] <==
	* I0610 14:04:53.459110       1 shared_informer.go:318] Caches are synced for garbage collector
	W0610 14:04:54.354922       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:04:54.354950       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 14:05:04.583214       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:05:04.583243       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 14:05:07.764376       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:05:07.764403       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 14:05:09.945973       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:05:09.946005       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 14:05:19.153938       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:05:19.153970       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 14:05:39.498822       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:05:39.498854       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 14:05:53.230662       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:05:53.230691       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 14:05:53.638688       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:05:53.638717       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 14:06:09.638610       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:06:09.638636       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0610 14:06:10.179784       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0610 14:06:10.190456       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-rjfvj"
	I0610 14:06:11.744842       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0610 14:06:11.750010       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0610 14:06:16.518176       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 14:06:16.518231       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [a1fe0da46a01bb32582a94dfd9bc819ca9d91f6eec8a904d2e6eaa7847e2a592] <==
	* I0610 14:02:27.081914       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0610 14:02:27.082002       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0610 14:02:27.082030       1 server_others.go:551] "Using iptables proxy"
	I0610 14:02:27.477705       1 server_others.go:190] "Using iptables Proxier"
	I0610 14:02:27.477792       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0610 14:02:27.477810       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0610 14:02:27.477828       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0610 14:02:27.477869       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 14:02:27.568192       1 server.go:657] "Version info" version="v1.27.2"
	I0610 14:02:27.568226       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 14:02:27.670317       1 config.go:188] "Starting service config controller"
	I0610 14:02:27.774377       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 14:02:27.670785       1 config.go:97] "Starting endpoint slice config controller"
	I0610 14:02:27.671198       1 config.go:315] "Starting node config controller"
	I0610 14:02:27.860175       1 shared_informer.go:318] Caches are synced for service config
	I0610 14:02:27.861894       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 14:02:27.861916       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 14:02:27.865952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 14:02:27.865977       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d45567f6616f0fb0140488a781740ba293ed3ba8cb25218a2d8a23fcf9d33b15] <==
	* W0610 14:02:07.182962       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 14:02:07.182992       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 14:02:07.183074       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 14:02:07.183095       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 14:02:07.183189       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 14:02:07.183210       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 14:02:07.183602       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 14:02:07.183654       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 14:02:08.044997       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 14:02:08.045044       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 14:02:08.082473       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 14:02:08.082500       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 14:02:08.088566       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 14:02:08.088592       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 14:02:08.094751       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 14:02:08.094842       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 14:02:08.102810       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 14:02:08.102835       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 14:02:08.113868       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 14:02:08.113887       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 14:02:08.176194       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 14:02:08.176217       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 14:02:08.217625       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 14:02:08.217665       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0610 14:02:11.177482       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 10 14:06:10 addons-060929 kubelet[1553]: W0610 14:06:10.573808    1553 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/abd5aecf4a3151a313bfc164cf4107565a35f47d10980ccd8ff3ff4ea8c1364c/crio/crio-36ed711ab168ec496cf67c15a5437101e98a034b1f9a416226a233b756244908 WatchSource:0}: Error finding container 36ed711ab168ec496cf67c15a5437101e98a034b1f9a416226a233b756244908: Status 404 returned error can't find the container with id 36ed711ab168ec496cf67c15a5437101e98a034b1f9a416226a233b756244908
	Jun 10 14:06:11 addons-060929 kubelet[1553]: I0610 14:06:11.076037    1553 scope.go:115] "RemoveContainer" containerID="a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17"
	Jun 10 14:06:11 addons-060929 kubelet[1553]: I0610 14:06:11.096960    1553 scope.go:115] "RemoveContainer" containerID="a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17"
	Jun 10 14:06:11 addons-060929 kubelet[1553]: E0610 14:06:11.097368    1553 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17\": container with ID starting with a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17 not found: ID does not exist" containerID="a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17"
	Jun 10 14:06:11 addons-060929 kubelet[1553]: I0610 14:06:11.097416    1553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17} err="failed to get container status \"a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17\": rpc error: code = NotFound desc = could not find container \"a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17\": container with ID starting with a0f35bc070c9ec90d10322748752c8f3a77ad6157d637ee8db42e0f3cfd45d17 not found: ID does not exist"
	Jun 10 14:06:11 addons-060929 kubelet[1553]: I0610 14:06:11.166397    1553 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swzg8\" (UniqueName: \"kubernetes.io/projected/2c77456e-199d-463d-afda-a83abf1d090f-kube-api-access-swzg8\") pod \"2c77456e-199d-463d-afda-a83abf1d090f\" (UID: \"2c77456e-199d-463d-afda-a83abf1d090f\") "
	Jun 10 14:06:11 addons-060929 kubelet[1553]: I0610 14:06:11.168243    1553 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c77456e-199d-463d-afda-a83abf1d090f-kube-api-access-swzg8" (OuterVolumeSpecName: "kube-api-access-swzg8") pod "2c77456e-199d-463d-afda-a83abf1d090f" (UID: "2c77456e-199d-463d-afda-a83abf1d090f"). InnerVolumeSpecName "kube-api-access-swzg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 14:06:11 addons-060929 kubelet[1553]: I0610 14:06:11.267334    1553 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-swzg8\" (UniqueName: \"kubernetes.io/projected/2c77456e-199d-463d-afda-a83abf1d090f-kube-api-access-swzg8\") on node \"addons-060929\" DevicePath \"\""
	Jun 10 14:06:11 addons-060929 kubelet[1553]: E0610 14:06:11.767058    1553 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7b4698b8c7-wf52k.1767514bde25d568", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7b4698b8c7-wf52k", UID:"b81896b1-2d41-4266-924c-552b7d22c52a", APIVersion:"v1", ResourceVersion:"778", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-060929"}, FirstTimestamp:time.Date(2023, time.June, 10, 14, 6, 11, 764958568, time.Local), LastTimestamp:time.Date(2023, time.June, 10, 14, 6, 11, 764958568, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7b4698b8c7-wf52k.1767514bde25d568" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 10 14:06:12 addons-060929 kubelet[1553]: I0610 14:06:12.079880    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2acd8000-a347-4f3f-8228-9f597a8dbc92 path="/var/lib/kubelet/pods/2acd8000-a347-4f3f-8228-9f597a8dbc92/volumes"
	Jun 10 14:06:12 addons-060929 kubelet[1553]: I0610 14:06:12.080180    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2c77456e-199d-463d-afda-a83abf1d090f path="/var/lib/kubelet/pods/2c77456e-199d-463d-afda-a83abf1d090f/volumes"
	Jun 10 14:06:12 addons-060929 kubelet[1553]: I0610 14:06:12.080457    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ac455e4c-aa5d-40d8-b098-a2d3cc84a236 path="/var/lib/kubelet/pods/ac455e4c-aa5d-40d8-b098-a2d3cc84a236/volumes"
	Jun 10 14:06:12 addons-060929 kubelet[1553]: I0610 14:06:12.089259    1553 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-rjfvj" podStartSLOduration=1.14855524 podCreationTimestamp="2023-06-10 14:06:10 +0000 UTC" firstStartedPulling="2023-06-10 14:06:10.601910693 +0000 UTC m=+240.609914042" lastFinishedPulling="2023-06-10 14:06:11.542574286 +0000 UTC m=+241.550577644" observedRunningTime="2023-06-10 14:06:12.089056558 +0000 UTC m=+242.097059922" watchObservedRunningTime="2023-06-10 14:06:12.089218842 +0000 UTC m=+242.097222205"
	Jun 10 14:06:13 addons-060929 kubelet[1553]: I0610 14:06:13.077081    1553 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b81896b1-2d41-4266-924c-552b7d22c52a-webhook-cert\") pod \"b81896b1-2d41-4266-924c-552b7d22c52a\" (UID: \"b81896b1-2d41-4266-924c-552b7d22c52a\") "
	Jun 10 14:06:13 addons-060929 kubelet[1553]: I0610 14:06:13.077177    1553 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptnpm\" (UniqueName: \"kubernetes.io/projected/b81896b1-2d41-4266-924c-552b7d22c52a-kube-api-access-ptnpm\") pod \"b81896b1-2d41-4266-924c-552b7d22c52a\" (UID: \"b81896b1-2d41-4266-924c-552b7d22c52a\") "
	Jun 10 14:06:13 addons-060929 kubelet[1553]: I0610 14:06:13.079328    1553 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81896b1-2d41-4266-924c-552b7d22c52a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b81896b1-2d41-4266-924c-552b7d22c52a" (UID: "b81896b1-2d41-4266-924c-552b7d22c52a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 14:06:13 addons-060929 kubelet[1553]: I0610 14:06:13.079532    1553 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81896b1-2d41-4266-924c-552b7d22c52a-kube-api-access-ptnpm" (OuterVolumeSpecName: "kube-api-access-ptnpm") pod "b81896b1-2d41-4266-924c-552b7d22c52a" (UID: "b81896b1-2d41-4266-924c-552b7d22c52a"). InnerVolumeSpecName "kube-api-access-ptnpm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 14:06:13 addons-060929 kubelet[1553]: I0610 14:06:13.083130    1553 scope.go:115] "RemoveContainer" containerID="639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470"
	Jun 10 14:06:13 addons-060929 kubelet[1553]: I0610 14:06:13.099211    1553 scope.go:115] "RemoveContainer" containerID="639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470"
	Jun 10 14:06:13 addons-060929 kubelet[1553]: E0610 14:06:13.099532    1553 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470\": container with ID starting with 639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470 not found: ID does not exist" containerID="639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470"
	Jun 10 14:06:13 addons-060929 kubelet[1553]: I0610 14:06:13.099579    1553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470} err="failed to get container status \"639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470\": rpc error: code = NotFound desc = could not find container \"639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470\": container with ID starting with 639aa78acef1e84343852508c7c880e052c6d4f47a85509e3790ec00b2497470 not found: ID does not exist"
	Jun 10 14:06:13 addons-060929 kubelet[1553]: I0610 14:06:13.178118    1553 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ptnpm\" (UniqueName: \"kubernetes.io/projected/b81896b1-2d41-4266-924c-552b7d22c52a-kube-api-access-ptnpm\") on node \"addons-060929\" DevicePath \"\""
	Jun 10 14:06:13 addons-060929 kubelet[1553]: I0610 14:06:13.178166    1553 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b81896b1-2d41-4266-924c-552b7d22c52a-webhook-cert\") on node \"addons-060929\" DevicePath \"\""
	Jun 10 14:06:14 addons-060929 kubelet[1553]: I0610 14:06:14.080144    1553 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b81896b1-2d41-4266-924c-552b7d22c52a path="/var/lib/kubelet/pods/b81896b1-2d41-4266-924c-552b7d22c52a/volumes"
	Jun 10 14:06:19 addons-060929 kubelet[1553]: W0610 14:06:19.373370    1553 container.go:586] Failed to update stats for container "/docker/abd5aecf4a3151a313bfc164cf4107565a35f47d10980ccd8ff3ff4ea8c1364c/crio/crio-1a230279ce03cbe17b67020dfbe32a930a9a4f95a6157ad13c7d4add1f9b48ad": unable to determine device info for dir: /var/lib/containers/storage/overlay/1479b8d90f76ad9402fa8c48f2b1bbe8f84d1c8993317d7575b8a3070855a4cb/diff: stat failed on /var/lib/containers/storage/overlay/1479b8d90f76ad9402fa8c48f2b1bbe8f84d1c8993317d7575b8a3070855a4cb/diff with error: no such file or directory, continuing to push stats
	
	* 
	* ==> storage-provisioner [cd4e652bdf6a7f7c4d93f3fbf0d938fa6f2b783686f06b4c40144a91e9e7a686] <==
	* I0610 14:02:58.261533       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 14:02:58.270828       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 14:02:58.270879       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 14:02:58.277069       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 14:02:58.277206       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-060929_00cd5bfb-200c-41c2-87d1-16270a435a2b!
	I0610 14:02:58.277266       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"faf9c37c-c00d-42fc-b5c4-f598cea096d9", APIVersion:"v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-060929_00cd5bfb-200c-41c2-87d1-16270a435a2b became leader
	I0610 14:02:58.377968       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-060929_00cd5bfb-200c-41c2-87d1-16270a435a2b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-060929 -n addons-060929
helpers_test.go:261: (dbg) Run:  kubectl --context addons-060929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (150.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image load --daemon gcr.io/google-containers/addon-resizer:functional-742762 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 image load --daemon gcr.io/google-containers/addon-resizer:functional-742762 --alsologtostderr: (8.619732959s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls
functional_test.go:446: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 image ls: (2.233680758s)
functional_test.go:441: expected "gcr.io/google-containers/addon-resizer:functional-742762" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.85s)
TestIngressAddonLegacy/serial/ValidateIngressAddons (172.97s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-889215 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-889215 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.08523455s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-889215 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-889215 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [246ce5f5-ac21-4f62-a1b9-81fdd0b93b15] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [246ce5f5-ac21-4f62-a1b9-81fdd0b93b15] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.004597521s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889215 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0610 14:13:36.156590   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:14:03.849732   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-889215 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.275107539s)
** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-889215 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889215 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.009817291s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889215 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889215 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-889215 addons disable ingress --alsologtostderr -v=1: (7.188259101s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-889215
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-889215:
-- stdout --
	[
	    {
	        "Id": "b94fc3364222ede627d0cfabd4802885b62be84289c82d3be8f0738f3f7cfa1d",
	        "Created": "2023-06-10T14:10:41.572668761Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 63793,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-10T14:10:41.845115591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/b94fc3364222ede627d0cfabd4802885b62be84289c82d3be8f0738f3f7cfa1d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b94fc3364222ede627d0cfabd4802885b62be84289c82d3be8f0738f3f7cfa1d/hostname",
	        "HostsPath": "/var/lib/docker/containers/b94fc3364222ede627d0cfabd4802885b62be84289c82d3be8f0738f3f7cfa1d/hosts",
	        "LogPath": "/var/lib/docker/containers/b94fc3364222ede627d0cfabd4802885b62be84289c82d3be8f0738f3f7cfa1d/b94fc3364222ede627d0cfabd4802885b62be84289c82d3be8f0738f3f7cfa1d-json.log",
	        "Name": "/ingress-addon-legacy-889215",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-889215:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-889215",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/645bd610f3bfcc28adfdd0a7f7a74ac4e373dd8e0acfb99609418c9f5ccf4229-init/diff:/var/lib/docker/overlay2/0dc1ddb6d62b4bee9beafd5f34260acd069d63ff74f1b10678aeef7f32badeb3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/645bd610f3bfcc28adfdd0a7f7a74ac4e373dd8e0acfb99609418c9f5ccf4229/merged",
	                "UpperDir": "/var/lib/docker/overlay2/645bd610f3bfcc28adfdd0a7f7a74ac4e373dd8e0acfb99609418c9f5ccf4229/diff",
	                "WorkDir": "/var/lib/docker/overlay2/645bd610f3bfcc28adfdd0a7f7a74ac4e373dd8e0acfb99609418c9f5ccf4229/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-889215",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-889215/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-889215",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-889215",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-889215",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "361803975fa4250f79ddfcbdaaf3701e1d38a4e22575c10232da04f235b58307",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/361803975fa4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-889215": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b94fc3364222",
	                        "ingress-addon-legacy-889215"
	                    ],
	                    "NetworkID": "32f48d24c05403fa94a1df5a7ff34d9f6df06537c43ba605da8fc7a1f78777e6",
	                    "EndpointID": "e9fdd07229d199fa532c4ec27d62e03d1296c8453e41a1237d0200f421b4eef3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-889215 -n ingress-addon-legacy-889215
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889215 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-742762                                                        | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	|         | image ls --format json                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-742762                                                        | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	|         | image ls --format table                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-742762 image build -t                                         | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	|         | localhost/my-image:functional-742762                                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                         |                             |         |         |                     |                     |
	| mount   | -p functional-742762                                                     | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port3146465806/001:/mount-9p |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                      |                             |         |         |                     |                     |
	| ssh     | functional-742762 ssh findmnt                                            | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-742762 ssh findmnt                                            | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-742762 ssh -- ls                                              | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	|         | -la /mount-9p                                                            |                             |         |         |                     |                     |
	| ssh     | functional-742762 ssh sudo                                               | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC |                     |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| ssh     | functional-742762 ssh findmnt                                            | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC |                     |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-742762                                                     | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup259915722/001:/mount2    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-742762                                                     | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup259915722/001:/mount1    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-742762                                                     | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup259915722/001:/mount3    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| image   | functional-742762 image ls                                               | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	| ssh     | functional-742762 ssh findmnt                                            | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| ssh     | functional-742762 ssh findmnt                                            | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	|         | -T /mount2                                                               |                             |         |         |                     |                     |
	| ssh     | functional-742762 ssh findmnt                                            | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	|         | -T /mount3                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-742762                                                     | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC |                     |
	|         | --kill=true                                                              |                             |         |         |                     |                     |
	| delete  | -p functional-742762                                                     | functional-742762           | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:10 UTC |
	| start   | -p ingress-addon-legacy-889215                                           | ingress-addon-legacy-889215 | jenkins | v1.30.1 | 10 Jun 23 14:10 UTC | 10 Jun 23 14:11 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                     |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-889215                                              | ingress-addon-legacy-889215 | jenkins | v1.30.1 | 10 Jun 23 14:11 UTC | 10 Jun 23 14:11 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-889215                                              | ingress-addon-legacy-889215 | jenkins | v1.30.1 | 10 Jun 23 14:11 UTC | 10 Jun 23 14:11 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-889215                                              | ingress-addon-legacy-889215 | jenkins | v1.30.1 | 10 Jun 23 14:11 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-889215 ip                                           | ingress-addon-legacy-889215 | jenkins | v1.30.1 | 10 Jun 23 14:14 UTC | 10 Jun 23 14:14 UTC |
	| addons  | ingress-addon-legacy-889215                                              | ingress-addon-legacy-889215 | jenkins | v1.30.1 | 10 Jun 23 14:14 UTC | 10 Jun 23 14:14 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-889215                                              | ingress-addon-legacy-889215 | jenkins | v1.30.1 | 10 Jun 23 14:14 UTC | 10 Jun 23 14:14 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 14:10:27
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 14:10:27.614918   63152 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:10:27.615033   63152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:10:27.615042   63152 out.go:309] Setting ErrFile to fd 2...
	I0610 14:10:27.615049   63152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:10:27.615168   63152 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:10:27.615771   63152 out.go:303] Setting JSON to false
	I0610 14:10:27.617059   63152 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6783,"bootTime":1686399445,"procs":564,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:10:27.617124   63152 start.go:137] virtualization: kvm guest
	I0610 14:10:27.619604   63152 out.go:177] * [ingress-addon-legacy-889215] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 14:10:27.621968   63152 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 14:10:27.623577   63152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:10:27.621945   63152 notify.go:220] Checking for updates...
	I0610 14:10:27.625936   63152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:10:27.627513   63152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:10:27.629242   63152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 14:10:27.630885   63152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 14:10:27.632623   63152 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 14:10:27.652149   63152 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:10:27.652232   63152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:10:27.696494   63152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-06-10 14:10:27.687558477 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:10:27.696593   63152 docker.go:294] overlay module found
	I0610 14:10:27.698811   63152 out.go:177] * Using the docker driver based on user configuration
	I0610 14:10:27.700493   63152 start.go:297] selected driver: docker
	I0610 14:10:27.700505   63152 start.go:875] validating driver "docker" against <nil>
	I0610 14:10:27.700514   63152 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 14:10:27.701226   63152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:10:27.747366   63152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-06-10 14:10:27.7393144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:10:27.747517   63152 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 14:10:27.747698   63152 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 14:10:27.749770   63152 out.go:177] * Using Docker driver with root privileges
	I0610 14:10:27.751299   63152 cni.go:84] Creating CNI manager for ""
	I0610 14:10:27.751311   63152 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0610 14:10:27.751319   63152 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 14:10:27.751334   63152 start_flags.go:319] config:
	{Name:ingress-addon-legacy-889215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-889215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:10:27.752986   63152 out.go:177] * Starting control plane node ingress-addon-legacy-889215 in cluster ingress-addon-legacy-889215
	I0610 14:10:27.754493   63152 cache.go:122] Beginning downloading kic base image for docker with crio
	I0610 14:10:27.756316   63152 out.go:177] * Pulling base image ...
	I0610 14:10:27.758116   63152 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0610 14:10:27.758148   63152 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 14:10:27.774006   63152 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0610 14:10:27.774028   63152 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0610 14:10:27.779959   63152 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0610 14:10:27.779988   63152 cache.go:57] Caching tarball of preloaded images
	I0610 14:10:27.780114   63152 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0610 14:10:27.782411   63152 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0610 14:10:27.784019   63152 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0610 14:10:27.816514   63152 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0610 14:10:33.467912   63152 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0610 14:10:33.468020   63152 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0610 14:10:34.483654   63152 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0610 14:10:34.484003   63152 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/config.json ...
	I0610 14:10:34.484035   63152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/config.json: {Name:mk1f4de1d66aab51402e9c423d66216126871145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:10:34.484184   63152 cache.go:195] Successfully downloaded all kic artifacts
	I0610 14:10:34.484203   63152 start.go:364] acquiring machines lock for ingress-addon-legacy-889215: {Name:mke7b9289c7257a627bb086fe2a51d02da57b309 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:10:34.484245   63152 start.go:368] acquired machines lock for "ingress-addon-legacy-889215" in 31.095µs
	I0610 14:10:34.484262   63152 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-889215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-889215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 14:10:34.484321   63152 start.go:125] createHost starting for "" (driver="docker")
	I0610 14:10:34.486864   63152 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0610 14:10:34.487090   63152 start.go:159] libmachine.API.Create for "ingress-addon-legacy-889215" (driver="docker")
	I0610 14:10:34.487116   63152 client.go:168] LocalClient.Create starting
	I0610 14:10:34.487171   63152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem
	I0610 14:10:34.487203   63152 main.go:141] libmachine: Decoding PEM data...
	I0610 14:10:34.487219   63152 main.go:141] libmachine: Parsing certificate...
	I0610 14:10:34.487264   63152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem
	I0610 14:10:34.487282   63152 main.go:141] libmachine: Decoding PEM data...
	I0610 14:10:34.487293   63152 main.go:141] libmachine: Parsing certificate...
	I0610 14:10:34.487551   63152 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-889215 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0610 14:10:34.502897   63152 cli_runner.go:211] docker network inspect ingress-addon-legacy-889215 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0610 14:10:34.502974   63152 network_create.go:281] running [docker network inspect ingress-addon-legacy-889215] to gather additional debugging logs...
	I0610 14:10:34.502992   63152 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-889215
	W0610 14:10:34.518159   63152 cli_runner.go:211] docker network inspect ingress-addon-legacy-889215 returned with exit code 1
	I0610 14:10:34.518185   63152 network_create.go:284] error running [docker network inspect ingress-addon-legacy-889215]: docker network inspect ingress-addon-legacy-889215: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-889215 not found
	I0610 14:10:34.518198   63152 network_create.go:286] output of [docker network inspect ingress-addon-legacy-889215]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-889215 not found
	
	** /stderr **
	I0610 14:10:34.518260   63152 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 14:10:34.533223   63152 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006c6a00}
	I0610 14:10:34.533255   63152 network_create.go:123] attempt to create docker network ingress-addon-legacy-889215 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0610 14:10:34.533296   63152 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-889215 ingress-addon-legacy-889215
	I0610 14:10:34.584566   63152 network_create.go:107] docker network ingress-addon-legacy-889215 192.168.49.0/24 created
	I0610 14:10:34.584598   63152 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-889215" container
	I0610 14:10:34.584665   63152 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0610 14:10:34.598847   63152 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-889215 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-889215 --label created_by.minikube.sigs.k8s.io=true
	I0610 14:10:34.615121   63152 oci.go:103] Successfully created a docker volume ingress-addon-legacy-889215
	I0610 14:10:34.615202   63152 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-889215-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-889215 --entrypoint /usr/bin/test -v ingress-addon-legacy-889215:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
	I0610 14:10:36.313247   63152 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-889215-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-889215 --entrypoint /usr/bin/test -v ingress-addon-legacy-889215:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib: (1.697988323s)
	I0610 14:10:36.313274   63152 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-889215
	I0610 14:10:36.313301   63152 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0610 14:10:36.313321   63152 kic.go:190] Starting extracting preloaded images to volume ...
	I0610 14:10:36.313382   63152 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-889215:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
	I0610 14:10:41.513294   63152 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-889215:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (5.199848563s)
	I0610 14:10:41.513335   63152 kic.go:199] duration metric: took 5.200008 seconds to extract preloaded images to volume
	W0610 14:10:41.513457   63152 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0610 14:10:41.513538   63152 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0610 14:10:41.559079   63152 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-889215 --name ingress-addon-legacy-889215 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-889215 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-889215 --network ingress-addon-legacy-889215 --ip 192.168.49.2 --volume ingress-addon-legacy-889215:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0610 14:10:41.853483   63152 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-889215 --format={{.State.Running}}
	I0610 14:10:41.870333   63152 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-889215 --format={{.State.Status}}
	I0610 14:10:41.886942   63152 cli_runner.go:164] Run: docker exec ingress-addon-legacy-889215 stat /var/lib/dpkg/alternatives/iptables
	I0610 14:10:41.963214   63152 oci.go:144] the created container "ingress-addon-legacy-889215" has a running status.
	I0610 14:10:41.963249   63152 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa...
	I0610 14:10:42.149999   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0610 14:10:42.150047   63152 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0610 14:10:42.168242   63152 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-889215 --format={{.State.Status}}
	I0610 14:10:42.188894   63152 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0610 14:10:42.188912   63152 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-889215 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0610 14:10:42.293054   63152 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-889215 --format={{.State.Status}}
	I0610 14:10:42.312074   63152 machine.go:88] provisioning docker machine ...
	I0610 14:10:42.312102   63152 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-889215"
	I0610 14:10:42.312144   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:10:42.329800   63152 main.go:141] libmachine: Using SSH client type: native
	I0610 14:10:42.330499   63152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0610 14:10:42.330524   63152 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-889215 && echo "ingress-addon-legacy-889215" | sudo tee /etc/hostname
	I0610 14:10:42.520297   63152 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-889215
	
	I0610 14:10:42.520371   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:10:42.536701   63152 main.go:141] libmachine: Using SSH client type: native
	I0610 14:10:42.537260   63152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0610 14:10:42.537291   63152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-889215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-889215/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-889215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 14:10:42.650176   63152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 14:10:42.650230   63152 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15074-18675/.minikube CaCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15074-18675/.minikube}
	I0610 14:10:42.650253   63152 ubuntu.go:177] setting up certificates
	I0610 14:10:42.650263   63152 provision.go:83] configureAuth start
	I0610 14:10:42.650327   63152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-889215
	I0610 14:10:42.666034   63152 provision.go:138] copyHostCerts
	I0610 14:10:42.666072   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem
	I0610 14:10:42.666107   63152 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem, removing ...
	I0610 14:10:42.666117   63152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem
	I0610 14:10:42.666232   63152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem (1675 bytes)
	I0610 14:10:42.666334   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem
	I0610 14:10:42.666367   63152 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem, removing ...
	I0610 14:10:42.666377   63152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem
	I0610 14:10:42.666422   63152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem (1078 bytes)
	I0610 14:10:42.666487   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem
	I0610 14:10:42.666517   63152 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem, removing ...
	I0610 14:10:42.666527   63152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem
	I0610 14:10:42.666563   63152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem (1123 bytes)
	I0610 14:10:42.666633   63152 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-889215 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-889215]
	I0610 14:10:42.810230   63152 provision.go:172] copyRemoteCerts
	I0610 14:10:42.810287   63152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 14:10:42.810323   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:10:42.826049   63152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa Username:docker}
	I0610 14:10:42.914336   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 14:10:42.914404   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 14:10:42.934966   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 14:10:42.935032   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0610 14:10:42.955613   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 14:10:42.955680   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 14:10:42.975582   63152 provision.go:86] duration metric: configureAuth took 325.302732ms
	I0610 14:10:42.975611   63152 ubuntu.go:193] setting minikube options for container-runtime
	I0610 14:10:42.975755   63152 config.go:182] Loaded profile config "ingress-addon-legacy-889215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0610 14:10:42.975837   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:10:42.991397   63152 main.go:141] libmachine: Using SSH client type: native
	I0610 14:10:42.991950   63152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0610 14:10:42.991983   63152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 14:10:43.211455   63152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 14:10:43.211478   63152 machine.go:91] provisioned docker machine in 899.386686ms
	I0610 14:10:43.211488   63152 client.go:171] LocalClient.Create took 8.724367851s
	I0610 14:10:43.211512   63152 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-889215" took 8.724419843s
	I0610 14:10:43.211521   63152 start.go:300] post-start starting for "ingress-addon-legacy-889215" (driver="docker")
	I0610 14:10:43.211530   63152 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 14:10:43.211597   63152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 14:10:43.211647   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:10:43.228019   63152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa Username:docker}
	I0610 14:10:43.315442   63152 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 14:10:43.318280   63152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0610 14:10:43.318309   63152 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0610 14:10:43.318324   63152 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0610 14:10:43.318334   63152 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0610 14:10:43.318345   63152 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/addons for local assets ...
	I0610 14:10:43.318405   63152 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/files for local assets ...
	I0610 14:10:43.318472   63152 filesync.go:149] local asset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> 254852.pem in /etc/ssl/certs
	I0610 14:10:43.318481   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> /etc/ssl/certs/254852.pem
	I0610 14:10:43.318568   63152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 14:10:43.325634   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem --> /etc/ssl/certs/254852.pem (1708 bytes)
	I0610 14:10:43.345531   63152 start.go:303] post-start completed in 133.994757ms
	I0610 14:10:43.345915   63152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-889215
	I0610 14:10:43.362989   63152 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/config.json ...
	I0610 14:10:43.363220   63152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 14:10:43.363262   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:10:43.380339   63152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa Username:docker}
	I0610 14:10:43.462763   63152 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0610 14:10:43.466515   63152 start.go:128] duration metric: createHost completed in 8.982184636s
	I0610 14:10:43.466532   63152 start.go:83] releasing machines lock for "ingress-addon-legacy-889215", held for 8.982277966s
	I0610 14:10:43.466579   63152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-889215
	I0610 14:10:43.483323   63152 ssh_runner.go:195] Run: cat /version.json
	I0610 14:10:43.483364   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:10:43.483429   63152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 14:10:43.483483   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:10:43.498958   63152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa Username:docker}
	I0610 14:10:43.501041   63152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa Username:docker}
	I0610 14:10:43.669190   63152 ssh_runner.go:195] Run: systemctl --version
	I0610 14:10:43.673164   63152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 14:10:43.808693   63152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 14:10:43.812868   63152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:10:43.830362   63152 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0610 14:10:43.830446   63152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:10:43.856843   63152 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0610 14:10:43.856863   63152 start.go:481] detecting cgroup driver to use...
	I0610 14:10:43.856895   63152 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0610 14:10:43.856951   63152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 14:10:43.869602   63152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 14:10:43.880151   63152 docker.go:193] disabling cri-docker service (if available) ...
	I0610 14:10:43.880210   63152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 14:10:43.892234   63152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 14:10:43.904708   63152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 14:10:43.985151   63152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 14:10:44.064603   63152 docker.go:209] disabling docker service ...
	I0610 14:10:44.064660   63152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 14:10:44.081535   63152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 14:10:44.090976   63152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 14:10:44.167994   63152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 14:10:44.249264   63152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 14:10:44.258787   63152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 14:10:44.272104   63152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0610 14:10:44.272162   63152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:10:44.280502   63152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 14:10:44.280554   63152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:10:44.288579   63152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:10:44.296625   63152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:10:44.304762   63152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 14:10:44.312419   63152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 14:10:44.319428   63152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 14:10:44.326378   63152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 14:10:44.395468   63152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 14:10:44.490901   63152 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 14:10:44.490952   63152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 14:10:44.494009   63152 start.go:549] Will wait 60s for crictl version
	I0610 14:10:44.494061   63152 ssh_runner.go:195] Run: which crictl
	I0610 14:10:44.496918   63152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 14:10:44.527949   63152 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0610 14:10:44.528009   63152 ssh_runner.go:195] Run: crio --version
	I0610 14:10:44.559090   63152 ssh_runner.go:195] Run: crio --version
	I0610 14:10:44.593623   63152 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.5 ...
	I0610 14:10:44.595207   63152 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-889215 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 14:10:44.610823   63152 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0610 14:10:44.614072   63152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 14:10:44.623433   63152 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0610 14:10:44.623481   63152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 14:10:44.665280   63152 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0610 14:10:44.665331   63152 ssh_runner.go:195] Run: which lz4
	I0610 14:10:44.668418   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 14:10:44.668506   63152 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 14:10:44.671325   63152 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 14:10:44.671357   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0610 14:10:45.568791   63152 crio.go:444] Took 0.900316 seconds to copy over tarball
	I0610 14:10:45.568854   63152 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 14:10:47.756828   63152 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.187955066s)
	I0610 14:10:47.756850   63152 crio.go:451] Took 2.188038 seconds to extract the tarball
	I0610 14:10:47.756857   63152 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 14:10:47.825319   63152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 14:10:47.855526   63152 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0610 14:10:47.855547   63152 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 14:10:47.855622   63152 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 14:10:47.855631   63152 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 14:10:47.855649   63152 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0610 14:10:47.855664   63152 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 14:10:47.855709   63152 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0610 14:10:47.855723   63152 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 14:10:47.855795   63152 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 14:10:47.855823   63152 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0610 14:10:47.857033   63152 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 14:10:47.857055   63152 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0610 14:10:47.857030   63152 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 14:10:47.857035   63152 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0610 14:10:47.857120   63152 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0610 14:10:47.857032   63152 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 14:10:47.857086   63152 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 14:10:47.857326   63152 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 14:10:48.013595   63152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0610 14:10:48.014379   63152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0610 14:10:48.027926   63152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0610 14:10:48.028515   63152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0610 14:10:48.041525   63152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0610 14:10:48.055506   63152 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0610 14:10:48.055546   63152 cri.go:217] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 14:10:48.055584   63152 ssh_runner.go:195] Run: which crictl
	I0610 14:10:48.055583   63152 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0610 14:10:48.055657   63152 cri.go:217] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0610 14:10:48.055698   63152 ssh_runner.go:195] Run: which crictl
	I0610 14:10:48.072475   63152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 14:10:48.074662   63152 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0610 14:10:48.074698   63152 cri.go:217] Removing image: registry.k8s.io/coredns:1.6.7
	I0610 14:10:48.074739   63152 ssh_runner.go:195] Run: which crictl
	I0610 14:10:48.074835   63152 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0610 14:10:48.074858   63152 cri.go:217] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 14:10:48.074878   63152 ssh_runner.go:195] Run: which crictl
	I0610 14:10:48.083421   63152 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0610 14:10:48.083460   63152 cri.go:217] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 14:10:48.083494   63152 ssh_runner.go:195] Run: which crictl
	I0610 14:10:48.083497   63152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0610 14:10:48.083571   63152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0610 14:10:48.092614   63152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0610 14:10:48.159055   63152 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 14:10:48.163523   63152 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0610 14:10:48.163570   63152 cri.go:217] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 14:10:48.163609   63152 ssh_runner.go:195] Run: which crictl
	I0610 14:10:48.163628   63152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0610 14:10:48.163706   63152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0610 14:10:48.174384   63152 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0610 14:10:48.174484   63152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0610 14:10:48.175763   63152 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0610 14:10:48.182720   63152 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0610 14:10:48.182772   63152 cri.go:217] Removing image: registry.k8s.io/pause:3.2
	I0610 14:10:48.182815   63152 ssh_runner.go:195] Run: which crictl
	I0610 14:10:48.379879   63152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 14:10:48.379994   63152 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0610 14:10:48.380032   63152 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0610 14:10:48.380142   63152 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0610 14:10:48.409772   63152 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0610 14:10:48.409820   63152 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0610 14:10:48.439921   63152 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0610 14:10:48.439975   63152 cache_images.go:92] LoadImages completed in 584.414938ms
	W0610 14:10:48.440049   63152 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0610 14:10:48.440122   63152 ssh_runner.go:195] Run: crio config
	I0610 14:10:48.478253   63152 cni.go:84] Creating CNI manager for ""
	I0610 14:10:48.478271   63152 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0610 14:10:48.478279   63152 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 14:10:48.478294   63152 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-889215 NodeName:ingress-addon-legacy-889215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0610 14:10:48.478424   63152 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-889215"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 14:10:48.478498   63152 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-889215 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-889215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 14:10:48.478563   63152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0610 14:10:48.486278   63152 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 14:10:48.486325   63152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 14:10:48.493412   63152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0610 14:10:48.508113   63152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0610 14:10:48.523035   63152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0610 14:10:48.537741   63152 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0610 14:10:48.540701   63152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 14:10:48.549732   63152 certs.go:56] Setting up /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215 for IP: 192.168.49.2
	I0610 14:10:48.549760   63152 certs.go:190] acquiring lock for shared ca certs: {Name:mk47e57fed67616a983122d88149f57794c568cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:10:48.549891   63152 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key
	I0610 14:10:48.549946   63152 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key
	I0610 14:10:48.549998   63152 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.key
	I0610 14:10:48.550014   63152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt with IP's: []
	I0610 14:10:48.872281   63152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt ...
	I0610 14:10:48.872317   63152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: {Name:mk4e14ce95f5a2c1deb53d70f1666d98a4385bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:10:48.872514   63152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.key ...
	I0610 14:10:48.872530   63152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.key: {Name:mk1667d3fcbc48e8d3360a526a37ec9d58dcd35f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:10:48.872634   63152 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.key.dd3b5fb2
	I0610 14:10:48.872653   63152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 14:10:48.967902   63152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.crt.dd3b5fb2 ...
	I0610 14:10:48.967933   63152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.crt.dd3b5fb2: {Name:mka7becf35e55ec38e3a513bf97154c575d4fc3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:10:48.968101   63152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.key.dd3b5fb2 ...
	I0610 14:10:48.968115   63152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.key.dd3b5fb2: {Name:mk1124d2eadf7488162b72b19f52507937211ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:10:48.968204   63152 certs.go:337] copying /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.crt
	I0610 14:10:48.968307   63152 certs.go:341] copying /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.key
	I0610 14:10:48.968394   63152 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.key
	I0610 14:10:48.968412   63152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.crt with IP's: []
	I0610 14:10:49.050957   63152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.crt ...
	I0610 14:10:49.050988   63152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.crt: {Name:mk2a01be81ce584415359d49ec77736f220bc1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:10:49.051156   63152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.key ...
	I0610 14:10:49.051170   63152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.key: {Name:mk3b50135e1eb00856fa673ba4e27255b8a4540c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:10:49.051267   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 14:10:49.051292   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 14:10:49.051307   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 14:10:49.051325   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 14:10:49.051343   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 14:10:49.051367   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 14:10:49.051393   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 14:10:49.051413   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 14:10:49.051473   63152 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485.pem (1338 bytes)
	W0610 14:10:49.051523   63152 certs.go:433] ignoring /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485_empty.pem, impossibly tiny 0 bytes
	I0610 14:10:49.051538   63152 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 14:10:49.051615   63152 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem (1078 bytes)
	I0610 14:10:49.051654   63152 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem (1123 bytes)
	I0610 14:10:49.051732   63152 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem (1675 bytes)
	I0610 14:10:49.051801   63152 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem (1708 bytes)
	I0610 14:10:49.051842   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> /usr/share/ca-certificates/254852.pem
	I0610 14:10:49.051861   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:10:49.051876   63152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485.pem -> /usr/share/ca-certificates/25485.pem
	I0610 14:10:49.052427   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 14:10:49.073696   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 14:10:49.094824   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 14:10:49.115718   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 14:10:49.136285   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 14:10:49.157976   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 14:10:49.180294   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 14:10:49.202750   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 14:10:49.226626   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem --> /usr/share/ca-certificates/254852.pem (1708 bytes)
	I0610 14:10:49.250252   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 14:10:49.271992   63152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485.pem --> /usr/share/ca-certificates/25485.pem (1338 bytes)
	I0610 14:10:49.293632   63152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 14:10:49.309607   63152 ssh_runner.go:195] Run: openssl version
	I0610 14:10:49.314494   63152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 14:10:49.322522   63152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:10:49.325738   63152 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 14:02 /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:10:49.325778   63152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:10:49.332032   63152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 14:10:49.340315   63152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25485.pem && ln -fs /usr/share/ca-certificates/25485.pem /etc/ssl/certs/25485.pem"
	I0610 14:10:49.348369   63152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25485.pem
	I0610 14:10:49.351572   63152 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 14:07 /usr/share/ca-certificates/25485.pem
	I0610 14:10:49.351619   63152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25485.pem
	I0610 14:10:49.357726   63152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25485.pem /etc/ssl/certs/51391683.0"
	I0610 14:10:49.365764   63152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254852.pem && ln -fs /usr/share/ca-certificates/254852.pem /etc/ssl/certs/254852.pem"
	I0610 14:10:49.373836   63152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254852.pem
	I0610 14:10:49.377087   63152 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 14:07 /usr/share/ca-certificates/254852.pem
	I0610 14:10:49.377137   63152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254852.pem
	I0610 14:10:49.383134   63152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254852.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 14:10:49.391020   63152 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 14:10:49.393830   63152 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 14:10:49.393882   63152 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-889215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-889215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:10:49.393959   63152 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 14:10:49.394000   63152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 14:10:49.427282   63152 cri.go:88] found id: ""
	I0610 14:10:49.427343   63152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 14:10:49.435528   63152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 14:10:49.443414   63152 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0610 14:10:49.443486   63152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 14:10:49.450963   63152 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 14:10:49.451006   63152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0610 14:10:49.495234   63152 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0610 14:10:49.495281   63152 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 14:10:49.531563   63152 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0610 14:10:49.531643   63152 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1035-gcp
	I0610 14:10:49.531675   63152 kubeadm.go:322] OS: Linux
	I0610 14:10:49.531714   63152 kubeadm.go:322] CGROUPS_CPU: enabled
	I0610 14:10:49.531774   63152 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0610 14:10:49.531836   63152 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0610 14:10:49.531913   63152 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0610 14:10:49.531958   63152 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0610 14:10:49.531999   63152 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0610 14:10:49.597285   63152 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 14:10:49.597445   63152 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 14:10:49.597594   63152 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 14:10:49.768151   63152 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 14:10:49.769160   63152 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 14:10:49.769201   63152 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 14:10:49.839563   63152 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 14:10:49.843764   63152 out.go:204]   - Generating certificates and keys ...
	I0610 14:10:49.843889   63152 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 14:10:49.844056   63152 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 14:10:49.892523   63152 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 14:10:50.111085   63152 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 14:10:50.404472   63152 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 14:10:50.564704   63152 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 14:10:50.852554   63152 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 14:10:50.852771   63152 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-889215 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0610 14:10:51.087994   63152 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 14:10:51.088161   63152 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-889215 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0610 14:10:51.422124   63152 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 14:10:51.568320   63152 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 14:10:51.744765   63152 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 14:10:51.744935   63152 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 14:10:51.951219   63152 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 14:10:52.257192   63152 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 14:10:52.796048   63152 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 14:10:52.918085   63152 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 14:10:52.918667   63152 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 14:10:52.921063   63152 out.go:204]   - Booting up control plane ...
	I0610 14:10:52.921165   63152 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 14:10:52.925978   63152 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 14:10:52.927033   63152 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 14:10:52.927867   63152 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 14:10:52.931196   63152 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 14:10:59.433238   63152 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502002 seconds
	I0610 14:10:59.433393   63152 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 14:10:59.445563   63152 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 14:10:59.959584   63152 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 14:10:59.959773   63152 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-889215 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0610 14:11:00.466927   63152 kubeadm.go:322] [bootstrap-token] Using token: gutvnv.ionu4zkpalarqgb1
	I0610 14:11:00.468892   63152 out.go:204]   - Configuring RBAC rules ...
	I0610 14:11:00.468997   63152 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 14:11:00.471813   63152 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 14:11:00.477555   63152 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 14:11:00.479241   63152 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 14:11:00.481064   63152 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 14:11:00.482757   63152 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 14:11:00.489456   63152 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 14:11:00.713232   63152 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 14:11:00.881745   63152 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 14:11:00.882911   63152 kubeadm.go:322] 
	I0610 14:11:00.883088   63152 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 14:11:00.883109   63152 kubeadm.go:322] 
	I0610 14:11:00.883202   63152 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 14:11:00.883230   63152 kubeadm.go:322] 
	I0610 14:11:00.883265   63152 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 14:11:00.883353   63152 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 14:11:00.883440   63152 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 14:11:00.883466   63152 kubeadm.go:322] 
	I0610 14:11:00.883536   63152 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 14:11:00.883641   63152 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 14:11:00.883742   63152 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 14:11:00.883753   63152 kubeadm.go:322] 
	I0610 14:11:00.883885   63152 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 14:11:00.883995   63152 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 14:11:00.884004   63152 kubeadm.go:322] 
	I0610 14:11:00.884166   63152 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gutvnv.ionu4zkpalarqgb1 \
	I0610 14:11:00.884424   63152 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 \
	I0610 14:11:00.884486   63152 kubeadm.go:322]     --control-plane 
	I0610 14:11:00.884501   63152 kubeadm.go:322] 
	I0610 14:11:00.884627   63152 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 14:11:00.884642   63152 kubeadm.go:322] 
	I0610 14:11:00.884759   63152 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gutvnv.ionu4zkpalarqgb1 \
	I0610 14:11:00.884899   63152 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 
	I0610 14:11:00.886294   63152 kubeadm.go:322] W0610 14:10:49.494706    1384 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0610 14:11:00.886542   63152 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0610 14:11:00.886650   63152 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 14:11:00.886788   63152 kubeadm.go:322] W0610 14:10:52.925693    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0610 14:11:00.886976   63152 kubeadm.go:322] W0610 14:10:52.926758    1384 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0610 14:11:00.887014   63152 cni.go:84] Creating CNI manager for ""
	I0610 14:11:00.887023   63152 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0610 14:11:00.889322   63152 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 14:11:00.891202   63152 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 14:11:00.895499   63152 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0610 14:11:00.895519   63152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 14:11:00.912766   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 14:11:01.360010   63152 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 14:11:01.360120   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:01.360126   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f minikube.k8s.io/name=ingress-addon-legacy-889215 minikube.k8s.io/updated_at=2023_06_10T14_11_01_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:01.367126   63152 ops.go:34] apiserver oom_adj: -16
	I0610 14:11:01.467650   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:02.035179   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:02.534644   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:03.034790   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:03.535305   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:04.034948   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:04.535496   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:05.034625   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:05.535120   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:06.034811   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:06.535621   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:07.034836   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:07.534969   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:08.034596   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:08.534912   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:09.034602   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:09.535369   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:10.034789   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:10.535247   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:11.034542   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:11.535442   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:12.035403   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:12.535424   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:13.035383   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:13.535090   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:14.035239   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:14.534988   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:15.035013   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:15.535434   63152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:11:15.607672   63152 kubeadm.go:1076] duration metric: took 14.247613417s to wait for elevateKubeSystemPrivileges.
	I0610 14:11:15.607709   63152 kubeadm.go:406] StartCluster complete in 26.213832088s
	I0610 14:11:15.607739   63152 settings.go:142] acquiring lock: {Name:mk5881f609c073bbe2e65c237b3cf267f8761582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:11:15.607810   63152 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:11:15.608504   63152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/kubeconfig: {Name:mk5649556a15e88039256d0bd607afdddb4a6ce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:11:15.608726   63152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 14:11:15.608825   63152 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 14:11:15.608919   63152 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-889215"
	I0610 14:11:15.608934   63152 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-889215"
	I0610 14:11:15.608943   63152 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-889215"
	I0610 14:11:15.608939   63152 config.go:182] Loaded profile config "ingress-addon-legacy-889215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0610 14:11:15.608952   63152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-889215"
	I0610 14:11:15.609014   63152 host.go:66] Checking if "ingress-addon-legacy-889215" exists ...
	I0610 14:11:15.609277   63152 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-889215 --format={{.State.Status}}
	I0610 14:11:15.609281   63152 kapi.go:59] client config for ingress-addon-legacy-889215: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt", KeyFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.key", CAFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bb8e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 14:11:15.609469   63152 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-889215 --format={{.State.Status}}
	I0610 14:11:15.610109   63152 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 14:11:15.631072   63152 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 14:11:15.629564   63152 kapi.go:59] client config for ingress-addon-legacy-889215: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt", KeyFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.key", CAFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bb8e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 14:11:15.633083   63152 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 14:11:15.633104   63152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 14:11:15.633178   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:11:15.633347   63152 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-889215"
	I0610 14:11:15.633390   63152 host.go:66] Checking if "ingress-addon-legacy-889215" exists ...
	I0610 14:11:15.633818   63152 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-889215 --format={{.State.Status}}
	I0610 14:11:15.650507   63152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa Username:docker}
	I0610 14:11:15.652424   63152 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 14:11:15.652440   63152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 14:11:15.652495   63152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-889215
	I0610 14:11:15.675304   63152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/ingress-addon-legacy-889215/id_rsa Username:docker}
	I0610 14:11:15.769424   63152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 14:11:15.784379   63152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 14:11:15.880415   63152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 14:11:16.161806   63152 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-889215" context rescaled to 1 replicas
	I0610 14:11:16.161846   63152 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 14:11:16.164317   63152 out.go:177] * Verifying Kubernetes components...
	I0610 14:11:16.166742   63152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:11:16.268009   63152 start.go:916] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0610 14:11:16.366840   63152 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 14:11:16.365606   63152 kapi.go:59] client config for ingress-addon-legacy-889215: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt", KeyFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.key", CAFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bb8e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 14:11:16.368602   63152 addons.go:499] enable addons completed in 759.773717ms: enabled=[storage-provisioner default-storageclass]
	I0610 14:11:16.367168   63152 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-889215" to be "Ready" ...
	I0610 14:11:18.373686   63152 node_ready.go:58] node "ingress-addon-legacy-889215" has status "Ready":"False"
	I0610 14:11:20.374065   63152 node_ready.go:58] node "ingress-addon-legacy-889215" has status "Ready":"False"
	I0610 14:11:21.549385   63152 node_ready.go:49] node "ingress-addon-legacy-889215" has status "Ready":"True"
	I0610 14:11:21.549418   63152 node_ready.go:38] duration metric: took 5.180780125s waiting for node "ingress-addon-legacy-889215" to be "Ready" ...
	I0610 14:11:21.549430   63152 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 14:11:21.787645   63152 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-sv6dq" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:23.867720   63152 pod_ready.go:102] pod "coredns-66bff467f8-sv6dq" in "kube-system" namespace has status "Ready":"False"
	I0610 14:11:24.867607   63152 pod_ready.go:92] pod "coredns-66bff467f8-sv6dq" in "kube-system" namespace has status "Ready":"True"
	I0610 14:11:24.867628   63152 pod_ready.go:81] duration metric: took 3.079955721s waiting for pod "coredns-66bff467f8-sv6dq" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:24.867637   63152 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-889215" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:24.871338   63152 pod_ready.go:92] pod "etcd-ingress-addon-legacy-889215" in "kube-system" namespace has status "Ready":"True"
	I0610 14:11:24.871354   63152 pod_ready.go:81] duration metric: took 3.712117ms waiting for pod "etcd-ingress-addon-legacy-889215" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:24.871372   63152 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-889215" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:24.875111   63152 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-889215" in "kube-system" namespace has status "Ready":"True"
	I0610 14:11:24.875132   63152 pod_ready.go:81] duration metric: took 3.753293ms waiting for pod "kube-apiserver-ingress-addon-legacy-889215" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:24.875144   63152 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-889215" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:24.880279   63152 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-889215" in "kube-system" namespace has status "Ready":"True"
	I0610 14:11:24.880298   63152 pod_ready.go:81] duration metric: took 5.147525ms waiting for pod "kube-controller-manager-ingress-addon-legacy-889215" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:24.880306   63152 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-plk42" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:24.883829   63152 pod_ready.go:92] pod "kube-proxy-plk42" in "kube-system" namespace has status "Ready":"True"
	I0610 14:11:24.883849   63152 pod_ready.go:81] duration metric: took 3.537518ms waiting for pod "kube-proxy-plk42" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:24.883856   63152 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-889215" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:25.063234   63152 request.go:628] Waited for 179.311634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-889215
	I0610 14:11:25.263069   63152 request.go:628] Waited for 197.275078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-889215
	I0610 14:11:25.265607   63152 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-889215" in "kube-system" namespace has status "Ready":"True"
	I0610 14:11:25.265629   63152 pod_ready.go:81] duration metric: took 381.766243ms waiting for pod "kube-scheduler-ingress-addon-legacy-889215" in "kube-system" namespace to be "Ready" ...
	I0610 14:11:25.265643   63152 pod_ready.go:38] duration metric: took 3.716193399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 14:11:25.265702   63152 api_server.go:52] waiting for apiserver process to appear ...
	I0610 14:11:25.265752   63152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 14:11:25.275627   63152 api_server.go:72] duration metric: took 9.113745246s to wait for apiserver process to appear ...
	I0610 14:11:25.275645   63152 api_server.go:88] waiting for apiserver healthz status ...
	I0610 14:11:25.275658   63152 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0610 14:11:25.280240   63152 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0610 14:11:25.281132   63152 api_server.go:141] control plane version: v1.18.20
	I0610 14:11:25.281157   63152 api_server.go:131] duration metric: took 5.505863ms to wait for apiserver health ...
	I0610 14:11:25.281165   63152 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 14:11:25.463564   63152 request.go:628] Waited for 182.324202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0610 14:11:25.468312   63152 system_pods.go:59] 8 kube-system pods found
	I0610 14:11:25.468343   63152 system_pods.go:61] "coredns-66bff467f8-sv6dq" [881702c3-dccc-4da7-87b4-53adec2f2db7] Running
	I0610 14:11:25.468350   63152 system_pods.go:61] "etcd-ingress-addon-legacy-889215" [7d06e3e0-2aec-420a-ac8d-2f5fa877be3d] Running
	I0610 14:11:25.468356   63152 system_pods.go:61] "kindnet-8nrpn" [9c712960-0aa0-487e-9036-bf6bac30080d] Running
	I0610 14:11:25.468362   63152 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-889215" [5bdcb131-0278-41c9-80df-f5d44c700e18] Running
	I0610 14:11:25.468368   63152 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-889215" [7ff5d3d2-69c9-4ea4-9f2a-32c9292bd651] Running
	I0610 14:11:25.468374   63152 system_pods.go:61] "kube-proxy-plk42" [99a51935-3baa-4634-bdad-0ab767409733] Running
	I0610 14:11:25.468380   63152 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-889215" [1941cb72-6af9-464f-bb26-c37b43cbf0cc] Running
	I0610 14:11:25.468387   63152 system_pods.go:61] "storage-provisioner" [bd256fe7-f2de-4db9-8c05-e568fe1afe71] Running
	I0610 14:11:25.468403   63152 system_pods.go:74] duration metric: took 187.226276ms to wait for pod list to return data ...
	I0610 14:11:25.468416   63152 default_sa.go:34] waiting for default service account to be created ...
	I0610 14:11:25.662960   63152 request.go:628] Waited for 194.460285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0610 14:11:25.665441   63152 default_sa.go:45] found service account: "default"
	I0610 14:11:25.665465   63152 default_sa.go:55] duration metric: took 197.043352ms for default service account to be created ...
	I0610 14:11:25.665474   63152 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 14:11:25.863919   63152 request.go:628] Waited for 198.365677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0610 14:11:25.868950   63152 system_pods.go:86] 8 kube-system pods found
	I0610 14:11:25.868974   63152 system_pods.go:89] "coredns-66bff467f8-sv6dq" [881702c3-dccc-4da7-87b4-53adec2f2db7] Running
	I0610 14:11:25.868979   63152 system_pods.go:89] "etcd-ingress-addon-legacy-889215" [7d06e3e0-2aec-420a-ac8d-2f5fa877be3d] Running
	I0610 14:11:25.868983   63152 system_pods.go:89] "kindnet-8nrpn" [9c712960-0aa0-487e-9036-bf6bac30080d] Running
	I0610 14:11:25.868988   63152 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-889215" [5bdcb131-0278-41c9-80df-f5d44c700e18] Running
	I0610 14:11:25.868992   63152 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-889215" [7ff5d3d2-69c9-4ea4-9f2a-32c9292bd651] Running
	I0610 14:11:25.868996   63152 system_pods.go:89] "kube-proxy-plk42" [99a51935-3baa-4634-bdad-0ab767409733] Running
	I0610 14:11:25.868999   63152 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-889215" [1941cb72-6af9-464f-bb26-c37b43cbf0cc] Running
	I0610 14:11:25.869003   63152 system_pods.go:89] "storage-provisioner" [bd256fe7-f2de-4db9-8c05-e568fe1afe71] Running
	I0610 14:11:25.869009   63152 system_pods.go:126] duration metric: took 203.530948ms to wait for k8s-apps to be running ...
	I0610 14:11:25.869025   63152 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 14:11:25.869072   63152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:11:25.879357   63152 system_svc.go:56] duration metric: took 10.325261ms WaitForService to wait for kubelet.
	I0610 14:11:25.879377   63152 kubeadm.go:581] duration metric: took 9.717499309s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 14:11:25.879407   63152 node_conditions.go:102] verifying NodePressure condition ...
	I0610 14:11:26.063790   63152 request.go:628] Waited for 184.320719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0610 14:11:26.066521   63152 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0610 14:11:26.066548   63152 node_conditions.go:123] node cpu capacity is 8
	I0610 14:11:26.066557   63152 node_conditions.go:105] duration metric: took 187.146098ms to run NodePressure ...
	I0610 14:11:26.066566   63152 start.go:228] waiting for startup goroutines ...
	I0610 14:11:26.066574   63152 start.go:233] waiting for cluster config update ...
	I0610 14:11:26.066583   63152 start.go:242] writing updated cluster config ...
	I0610 14:11:26.066829   63152 ssh_runner.go:195] Run: rm -f paused
	I0610 14:11:26.110745   63152 start.go:573] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0610 14:11:26.113192   63152 out.go:177] 
	W0610 14:11:26.114875   63152 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0610 14:11:26.116580   63152 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0610 14:11:26.118319   63152 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-889215" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Jun 10 14:14:06 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:06.305999105Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-jfpld/hello-world-app" id=57d9274f-24cb-48c3-8504-433377cfba2b name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jun 10 14:14:06 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:06.306110344Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 10 14:14:06 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:06.391211485Z" level=info msg="Created container 34f30ff51e81c778b6c681aa4092deb9313ea90ac4b9eea19c5ecd49c63b7b12: default/hello-world-app-5f5d8b66bb-jfpld/hello-world-app" id=57d9274f-24cb-48c3-8504-433377cfba2b name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Jun 10 14:14:06 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:06.391636541Z" level=info msg="Starting container: 34f30ff51e81c778b6c681aa4092deb9313ea90ac4b9eea19c5ecd49c63b7b12" id=86bcc3d8-d4d4-409b-a717-a5914da74a88 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Jun 10 14:14:06 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:06.399849256Z" level=info msg="Started container" PID=4679 containerID=34f30ff51e81c778b6c681aa4092deb9313ea90ac4b9eea19c5ecd49c63b7b12 description=default/hello-world-app-5f5d8b66bb-jfpld/hello-world-app id=86bcc3d8-d4d4-409b-a717-a5914da74a88 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=a647771df177a97a209ea7fa8cf1bbcaeb8d1a98ca03d4bbe9c021590201c233
	Jun 10 14:14:08 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:08.076545434Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=29cc0bd7-223b-429e-a0b5-0e76ef8ca7dd name=/runtime.v1alpha2.ImageService/ImageStatus
	Jun 10 14:14:21 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:21.076297771Z" level=info msg="Stopping pod sandbox: dd6518b174362c3607e7e42f596632159e2ac84a35c6c418c2c4489d224dbaf9" id=617bd2d0-e368-4b9d-9c3c-d17d53743e47 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 10 14:14:21 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:21.077412841Z" level=info msg="Stopped pod sandbox: dd6518b174362c3607e7e42f596632159e2ac84a35c6c418c2c4489d224dbaf9" id=617bd2d0-e368-4b9d-9c3c-d17d53743e47 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 10 14:14:21 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:21.633469689Z" level=info msg="Stopping container: c128a1853e0a7b9481f164f976014c46d2c3b38cccf4a9f06a6536672a3ec109 (timeout: 2s)" id=8a819d79-e82c-4401-8ee3-4f02ecf703e4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 10 14:14:21 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:21.635485437Z" level=info msg="Stopping container: c128a1853e0a7b9481f164f976014c46d2c3b38cccf4a9f06a6536672a3ec109 (timeout: 2s)" id=dfab2ec7-0f68-417f-93cd-8d8ab1dabc40 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.642774485Z" level=warning msg="Stopping container c128a1853e0a7b9481f164f976014c46d2c3b38cccf4a9f06a6536672a3ec109 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=8a819d79-e82c-4401-8ee3-4f02ecf703e4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 10 14:14:23 ingress-addon-legacy-889215 conmon[3371]: conmon c128a1853e0a7b9481f1 <ninfo>: container 3383 exited with status 137
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.803316964Z" level=info msg="Stopped container c128a1853e0a7b9481f164f976014c46d2c3b38cccf4a9f06a6536672a3ec109: ingress-nginx/ingress-nginx-controller-7fcf777cb7-b785d/controller" id=dfab2ec7-0f68-417f-93cd-8d8ab1dabc40 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.803358107Z" level=info msg="Stopped container c128a1853e0a7b9481f164f976014c46d2c3b38cccf4a9f06a6536672a3ec109: ingress-nginx/ingress-nginx-controller-7fcf777cb7-b785d/controller" id=8a819d79-e82c-4401-8ee3-4f02ecf703e4 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.803855933Z" level=info msg="Stopping pod sandbox: b562ce48c2cd6a0da4cd30320854efc0e284431d557a61380a19b2c2e7bbf714" id=cf37626b-ca96-41bd-a425-8897b070cce0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.803983154Z" level=info msg="Stopping pod sandbox: b562ce48c2cd6a0da4cd30320854efc0e284431d557a61380a19b2c2e7bbf714" id=030ee9b9-2621-42ed-b22f-600199709849 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.806743998Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-QH7VBNZIUEY3ZDK5 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-GGGVHVI77BJTV5HU - [0:0]\n-X KUBE-HP-QH7VBNZIUEY3ZDK5\n-X KUBE-HP-GGGVHVI77BJTV5HU\nCOMMIT\n"
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.807923546Z" level=info msg="Closing host port tcp:80"
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.807957396Z" level=info msg="Closing host port tcp:443"
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.808905296Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.808921277Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.809029358Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-b785d Namespace:ingress-nginx ID:b562ce48c2cd6a0da4cd30320854efc0e284431d557a61380a19b2c2e7bbf714 UID:61c94a2a-3fad-4cfb-8c6a-394c988d6d2f NetNS:/var/run/netns/4219e0e5-93cc-4fe7-bb29-66458fd989f3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.809137622Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-b785d from CNI network \"kindnet\" (type=ptp)"
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.835402015Z" level=info msg="Stopped pod sandbox: b562ce48c2cd6a0da4cd30320854efc0e284431d557a61380a19b2c2e7bbf714" id=cf37626b-ca96-41bd-a425-8897b070cce0 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 10 14:14:23 ingress-addon-legacy-889215 crio[961]: time="2023-06-10 14:14:23.835515279Z" level=info msg="Stopped pod sandbox (already stopped): b562ce48c2cd6a0da4cd30320854efc0e284431d557a61380a19b2c2e7bbf714" id=030ee9b9-2621-42ed-b22f-600199709849 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	34f30ff51e81c       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            22 seconds ago      Running             hello-world-app           0                   a647771df177a       hello-world-app-5f5d8b66bb-jfpld
	d8be1955b91a6       docker.io/library/nginx@sha256:0b0af14a00ea0e4fd9b09e77d2b89b71b5c5a97f9aa073553f355415bc34ae33                    2 minutes ago       Running             nginx                     0                   403873dfc4d9b       nginx
	c128a1853e0a7       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   b562ce48c2cd6       ingress-nginx-controller-7fcf777cb7-b785d
	73132bc88f222       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   0664b24f39892       ingress-nginx-admission-patch-k8rxz
	72a7bac7b1a42       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   3c47742ff4d18       ingress-nginx-admission-create-cp22k
	845f7f10bef2a       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   801b5ed2e2c30       coredns-66bff467f8-sv6dq
	4bfc5576adeb6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   0ed8dea518245       storage-provisioner
	dbeeb85b74000       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974                 3 minutes ago       Running             kindnet-cni               0                   7cb8bf0b9cb2c       kindnet-8nrpn
	18cc390a2770e       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   da8e9ab541219       kube-proxy-plk42
	2bd24f475db0e       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   0fd9fb5e5e23f       kube-scheduler-ingress-addon-legacy-889215
	09a992943f25f       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   56332d1933ef4       etcd-ingress-addon-legacy-889215
	0a431e1fa3fa2       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   51ae2c67072c4       kube-apiserver-ingress-addon-legacy-889215
	e153a626ee24c       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   fa70854eaa09b       kube-controller-manager-ingress-addon-legacy-889215
	
	* 
	* ==> coredns [845f7f10bef2a31b995aa10986b71a034bcd12c80ab38d6aa995ab9e9cfb8b80] <==
	* [INFO] 10.244.0.5:55362 - 17234 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.010906278s
	[INFO] 10.244.0.5:46123 - 2037 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005081244s
	[INFO] 10.244.0.5:52393 - 46863 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005059038s
	[INFO] 10.244.0.5:55362 - 12480 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004973991s
	[INFO] 10.244.0.5:41147 - 25483 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005299159s
	[INFO] 10.244.0.5:55828 - 63637 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005272561s
	[INFO] 10.244.0.5:46622 - 10061 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005219982s
	[INFO] 10.244.0.5:49212 - 52012 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005222805s
	[INFO] 10.244.0.5:49885 - 59642 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005475187s
	[INFO] 10.244.0.5:55362 - 36659 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005916821s
	[INFO] 10.244.0.5:41147 - 23335 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00603953s
	[INFO] 10.244.0.5:49885 - 10940 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005739055s
	[INFO] 10.244.0.5:46622 - 27955 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006010369s
	[INFO] 10.244.0.5:46123 - 28677 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006250544s
	[INFO] 10.244.0.5:55828 - 804 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006113954s
	[INFO] 10.244.0.5:49212 - 35762 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006121821s
	[INFO] 10.244.0.5:52393 - 9035 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006247533s
	[INFO] 10.244.0.5:46622 - 53145 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000106342s
	[INFO] 10.244.0.5:55362 - 8647 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000140198s
	[INFO] 10.244.0.5:52393 - 58824 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004579s
	[INFO] 10.244.0.5:46123 - 47866 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000178935s
	[INFO] 10.244.0.5:55828 - 63785 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000189189s
	[INFO] 10.244.0.5:49885 - 31758 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000234009s
	[INFO] 10.244.0.5:49212 - 6025 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000083016s
	[INFO] 10.244.0.5:41147 - 33307 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000429186s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-889215
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-889215
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f
	                    minikube.k8s.io/name=ingress-addon-legacy-889215
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T14_11_01_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 14:10:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-889215
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 14:14:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 14:12:01 +0000   Sat, 10 Jun 2023 14:10:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 14:12:01 +0000   Sat, 10 Jun 2023 14:10:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 14:12:01 +0000   Sat, 10 Jun 2023 14:10:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 14:12:01 +0000   Sat, 10 Jun 2023 14:11:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-889215
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a9be25b04f74d1b931d4c8c508adf33
	  System UUID:                15a2d875-66c1-4872-95d5-3873c678a3cf
	  Boot ID:                    e810f687-8f99-49aa-a9be-3ee9974bdd8c
	  Kernel Version:             5.15.0-1035-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-jfpld                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-sv6dq                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m14s
	  kube-system                 etcd-ingress-addon-legacy-889215                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-8nrpn                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m13s
	  kube-system                 kube-apiserver-ingress-addon-legacy-889215             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-889215    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kube-proxy-plk42                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-scheduler-ingress-addon-legacy-889215             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             120Mi (0%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m28s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m28s  kubelet     Node ingress-addon-legacy-889215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m28s  kubelet     Node ingress-addon-legacy-889215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m28s  kubelet     Node ingress-addon-legacy-889215 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m13s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m8s   kubelet     Node ingress-addon-legacy-889215 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004913] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006589] FS-Cache: N-cookie d=00000000cd7bd88f{9p.inode} n=00000000111740fd
	[  +0.007346] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.303680] FS-Cache: Duplicate cookie detected
	[  +0.004786] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006739] FS-Cache: O-cookie d=00000000cd7bd88f{9p.inode} n=000000001b29a883
	[  +0.007358] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004928] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006593] FS-Cache: N-cookie d=00000000cd7bd88f{9p.inode} n=000000000c71919a
	[  +0.008749] FS-Cache: N-key=[8] '0690130200000000'
	[  +1.834498] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jun10 14:11] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[  +1.000421] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[  +2.015794] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[Jun10 14:12] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[  +8.191118] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[ +16.126260] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[ +33.020471] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	
	* 
	* ==> etcd [09a992943f25fbfcbaaaf6863ad47357255990b4211ea167c645ffd0fedfaa0c] <==
	* 2023-06-10 14:10:53.975371 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-06-10 14:10:53.978031 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-06-10 14:10:53.978185 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-06-10 14:10:53.978279 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/06/10 14:10:54 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/06/10 14:10:54 INFO: aec36adc501070cc became candidate at term 2
	raft2023/06/10 14:10:54 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/06/10 14:10:54 INFO: aec36adc501070cc became leader at term 2
	raft2023/06/10 14:10:54 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-06-10 14:10:54.866878 I | etcdserver: setting up the initial cluster version to 3.4
	2023-06-10 14:10:54.867739 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-06-10 14:10:54.867793 I | etcdserver/api: enabled capabilities for version 3.4
	2023-06-10 14:10:54.867843 I | etcdserver: published {Name:ingress-addon-legacy-889215 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-06-10 14:10:54.867868 I | embed: ready to serve client requests
	2023-06-10 14:10:54.867956 I | embed: ready to serve client requests
	2023-06-10 14:10:54.869376 I | embed: serving client requests on 192.168.49.2:2379
	2023-06-10 14:10:54.870496 I | embed: serving client requests on 127.0.0.1:2379
	2023-06-10 14:11:21.199134 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-ingress-addon-legacy-889215\" " with result "range_response_count:1 size:4788" took too long (120.360159ms) to execute
	2023-06-10 14:11:21.543706 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-ingress-addon-legacy-889215\" " with result "range_response_count:1 size:6680" took too long (304.5186ms) to execute
	2023-06-10 14:11:21.544229 W | etcdserver: request "header:<ID:8128021672521208246 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/ingress-addon-legacy-889215\" mod_revision:385 > success:<request_put:<key:\"/registry/minions/ingress-addon-legacy-889215\" value_size:6323 >> failure:<request_range:<key:\"/registry/minions/ingress-addon-legacy-889215\" > >>" with result "size:16" took too long (168.700543ms) to execute
	2023-06-10 14:11:21.544718 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-sv6dq\" " with result "range_response_count:1 size:3753" took too long (305.581007ms) to execute
	2023-06-10 14:11:21.545383 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-889215\" " with result "range_response_count:1 size:6390" took too long (172.980558ms) to execute
	2023-06-10 14:11:21.781877 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:8 size:37576" took too long (231.373105ms) to execute
	2023-06-10 14:11:21.995194 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-ingress-addon-legacy-889215\" " with result "range_response_count:1 size:6682" took too long (132.967249ms) to execute
	2023-06-10 14:11:22.173360 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2695" took too long (171.616682ms) to execute
	
	* 
	* ==> kernel <==
	*  14:14:29 up  1:57,  0 users,  load average: 0.41, 1.06, 0.77
	Linux ingress-addon-legacy-889215 5.15.0-1035-gcp #43~20.04.1-Ubuntu SMP Mon May 22 16:49:11 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [dbeeb85b74000bcd5a582091c7caf56e93df08f66b6efc001e5aa773dbe90691] <==
	* I0610 14:12:28.717784       1 main.go:227] handling current node
	I0610 14:12:38.720840       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:12:38.720869       1 main.go:227] handling current node
	I0610 14:12:48.725589       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:12:48.725611       1 main.go:227] handling current node
	I0610 14:12:58.737240       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:12:58.737266       1 main.go:227] handling current node
	I0610 14:13:08.741098       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:13:08.741121       1 main.go:227] handling current node
	I0610 14:13:18.753008       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:13:18.753032       1 main.go:227] handling current node
	I0610 14:13:28.756850       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:13:28.756874       1 main.go:227] handling current node
	I0610 14:13:38.769189       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:13:38.769217       1 main.go:227] handling current node
	I0610 14:13:48.773336       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:13:48.773360       1 main.go:227] handling current node
	I0610 14:13:58.785162       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:13:58.785189       1 main.go:227] handling current node
	I0610 14:14:08.788107       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:14:08.788129       1 main.go:227] handling current node
	I0610 14:14:18.799878       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:14:18.799902       1 main.go:227] handling current node
	I0610 14:14:28.803689       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0610 14:14:28.803713       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [0a431e1fa3fa2783f71bffed60eee15d9e61776eb612b32bdc0a7fb0a7b170c4] <==
	* E0610 14:10:57.803296       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0610 14:10:57.901241       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 14:10:57.901688       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0610 14:10:57.901705       1 cache.go:39] Caches are synced for autoregister controller
	I0610 14:10:57.901882       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 14:10:57.902243       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0610 14:10:58.800255       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0610 14:10:58.800282       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 14:10:58.804839       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0610 14:10:58.807411       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0610 14:10:58.807430       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0610 14:10:59.105036       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 14:10:59.133681       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0610 14:10:59.195734       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0610 14:10:59.196526       1 controller.go:609] quota admission added evaluator for: endpoints
	I0610 14:10:59.199183       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 14:10:59.568863       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 14:11:00.131668       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0610 14:11:00.703585       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0610 14:11:00.872380       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0610 14:11:15.649048       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0610 14:11:16.165057       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0610 14:11:26.527788       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0610 14:11:45.302104       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0610 14:14:21.644202       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [e153a626ee24cd1b8a58445d57c3b6c19c8a6126de9da3ebc1cefc167947e6ee] <==
	* E0610 14:11:15.794013       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E0610 14:11:15.795299       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0610 14:11:15.860023       1 shared_informer.go:230] Caches are synced for endpoint 
	I0610 14:11:15.884908       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0610 14:11:16.160004       1 shared_informer.go:230] Caches are synced for stateful set 
	I0610 14:11:16.160004       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0610 14:11:16.163737       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0610 14:11:16.164080       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0610 14:11:16.171494       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"ef79a13c-07ad-4ac3-96e5-e583dab42d18", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-8nrpn
	I0610 14:11:16.174854       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"691e4267-a4eb-44c8-b2f8-507164466511", APIVersion:"apps/v1", ResourceVersion:"206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-plk42
	I0610 14:11:16.181871       1 shared_informer.go:230] Caches are synced for resource quota 
	I0610 14:11:16.185544       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0610 14:11:16.186339       1 shared_informer.go:230] Caches are synced for resource quota 
	E0610 14:11:16.261864       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"ef79a13c-07ad-4ac3-96e5-e583dab42d18", ResourceVersion:"232", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63822003061, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230511-dc714da8\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0018b3fa0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0018b3fc0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0018b3fe0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00176e000), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00176e020), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00176e040), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230511-dc714da8", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00176e060)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00176e0a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0009de8c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0005ba188), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000334850), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000bdc1b0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0005ba1d0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0610 14:11:25.735645       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0610 14:11:26.521705       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"95927a91-0585-419f-9041-4cc5bf9616ed", APIVersion:"apps/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0610 14:11:26.529087       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"54c96fe5-a6a4-4036-9935-971e89e0f456", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-b785d
	I0610 14:11:26.567673       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"29b37822-e78c-4c1e-91fe-e0d267b6a894", APIVersion:"batch/v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-cp22k
	I0610 14:11:26.578300       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9c66ad03-f7f5-4f85-8134-9f9faf575767", APIVersion:"batch/v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-k8rxz
	I0610 14:11:29.129829       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"29b37822-e78c-4c1e-91fe-e0d267b6a894", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0610 14:11:29.137079       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"9c66ad03-f7f5-4f85-8134-9f9faf575767", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0610 14:14:04.922854       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"1024e1ca-a476-4b60-aacb-62d230ac1ff5", APIVersion:"apps/v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0610 14:14:04.932180       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"fd8ce1ba-bc21-478d-a994-589b0a4dc11e", APIVersion:"apps/v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-jfpld
	E0610 14:14:26.373905       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-wx9f9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [18cc390a2770e485dceaa0df4527711e5ef1d0d0e03e3f2c4b5f7b8394b19a39] <==
	* W0610 14:11:16.742410       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0610 14:11:16.748195       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0610 14:11:16.748216       1 server_others.go:186] Using iptables Proxier.
	I0610 14:11:16.748468       1 server.go:583] Version: v1.18.20
	I0610 14:11:16.748925       1 config.go:133] Starting endpoints config controller
	I0610 14:11:16.748943       1 config.go:315] Starting service config controller
	I0610 14:11:16.748955       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0610 14:11:16.748944       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0610 14:11:16.849124       1 shared_informer.go:230] Caches are synced for service config 
	I0610 14:11:16.849142       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [2bd24f475db0e675f30a00168cd3e8f509849f5eadcc9a5881a3e689d1dade10] <==
	* I0610 14:10:57.884542       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0610 14:10:57.960274       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 14:10:57.960305       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 14:10:57.960697       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0610 14:10:57.960844       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0610 14:10:57.962605       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 14:10:57.964637       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 14:10:57.964870       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 14:10:57.964979       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 14:10:57.965031       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 14:10:57.965284       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 14:10:57.965486       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 14:10:57.965600       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 14:10:57.965756       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 14:10:57.965814       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 14:10:57.965871       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 14:10:57.966259       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 14:10:58.878344       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 14:10:58.904598       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 14:10:58.914949       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 14:10:58.978942       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 14:10:58.981992       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 14:11:02.160519       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0610 14:11:15.694807       1 factory.go:503] pod: kube-system/coredns-66bff467f8-sv6dq is already present in the active queue
	E0610 14:11:16.367483       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Jun 10 14:13:43 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:13:43.077149    1881 pod_workers.go:191] Error syncing pod eccaaad3-b43d-4daf-b866-0c517ed65890 ("kube-ingress-dns-minikube_kube-system(eccaaad3-b43d-4daf-b866-0c517ed65890)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jun 10 14:13:56 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:13:56.077027    1881 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jun 10 14:13:56 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:13:56.077077    1881 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jun 10 14:13:56 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:13:56.077130    1881 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jun 10 14:13:56 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:13:56.077165    1881 pod_workers.go:191] Error syncing pod eccaaad3-b43d-4daf-b866-0c517ed65890 ("kube-ingress-dns-minikube_kube-system(eccaaad3-b43d-4daf-b866-0c517ed65890)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jun 10 14:14:04 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:04.937901    1881 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jun 10 14:14:05 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:05.100992    1881 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-t2pjb" (UniqueName: "kubernetes.io/secret/84e2da44-322e-4592-baec-8a2964b2f501-default-token-t2pjb") pod "hello-world-app-5f5d8b66bb-jfpld" (UID: "84e2da44-322e-4592-baec-8a2964b2f501")
	Jun 10 14:14:05 ingress-addon-legacy-889215 kubelet[1881]: W0610 14:14:05.265624    1881 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/b94fc3364222ede627d0cfabd4802885b62be84289c82d3be8f0738f3f7cfa1d/crio/crio-a647771df177a97a209ea7fa8cf1bbcaeb8d1a98ca03d4bbe9c021590201c233 WatchSource:0}: Error finding container a647771df177a97a209ea7fa8cf1bbcaeb8d1a98ca03d4bbe9c021590201c233: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc00109bd80 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Jun 10 14:14:08 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:14:08.076852    1881 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jun 10 14:14:08 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:14:08.076886    1881 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jun 10 14:14:08 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:14:08.076930    1881 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jun 10 14:14:08 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:14:08.076957    1881 pod_workers.go:191] Error syncing pod eccaaad3-b43d-4daf-b866-0c517ed65890 ("kube-ingress-dns-minikube_kube-system(eccaaad3-b43d-4daf-b866-0c517ed65890)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jun 10 14:14:20 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:20.432690    1881 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-vt8kw" (UniqueName: "kubernetes.io/secret/eccaaad3-b43d-4daf-b866-0c517ed65890-minikube-ingress-dns-token-vt8kw") pod "eccaaad3-b43d-4daf-b866-0c517ed65890" (UID: "eccaaad3-b43d-4daf-b866-0c517ed65890")
	Jun 10 14:14:20 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:20.434429    1881 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eccaaad3-b43d-4daf-b866-0c517ed65890-minikube-ingress-dns-token-vt8kw" (OuterVolumeSpecName: "minikube-ingress-dns-token-vt8kw") pod "eccaaad3-b43d-4daf-b866-0c517ed65890" (UID: "eccaaad3-b43d-4daf-b866-0c517ed65890"). InnerVolumeSpecName "minikube-ingress-dns-token-vt8kw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 14:14:20 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:20.532938    1881 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-vt8kw" (UniqueName: "kubernetes.io/secret/eccaaad3-b43d-4daf-b866-0c517ed65890-minikube-ingress-dns-token-vt8kw") on node "ingress-addon-legacy-889215" DevicePath ""
	Jun 10 14:14:21 ingress-addon-legacy-889215 kubelet[1881]: W0610 14:14:21.399626    1881 pod_container_deletor.go:77] Container "dd6518b174362c3607e7e42f596632159e2ac84a35c6c418c2c4489d224dbaf9" not found in pod's containers
	Jun 10 14:14:21 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:14:21.634181    1881 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-b785d.176751bdec8c0af5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-b785d", UID:"61c94a2a-3fad-4cfb-8c6a-394c988d6d2f", APIVersion:"v1", ResourceVersion:"456", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-889215"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1193f2f65b7e8f5, ext:200960953143, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1193f2f65b7e8f5, ext:200960953143, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-b785d.176751bdec8c0af5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 10 14:14:21 ingress-addon-legacy-889215 kubelet[1881]: E0610 14:14:21.637957    1881 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-b785d.176751bdec8c0af5", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-b785d", UID:"61c94a2a-3fad-4cfb-8c6a-394c988d6d2f", APIVersion:"v1", ResourceVersion:"456", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-889215"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1193f2f65b7e8f5, ext:200960953143, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1193f2f65dcffbc, ext:200963383802, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-b785d.176751bdec8c0af5" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 10 14:14:24 ingress-addon-legacy-889215 kubelet[1881]: W0610 14:14:24.404787    1881 pod_container_deletor.go:77] Container "b562ce48c2cd6a0da4cd30320854efc0e284431d557a61380a19b2c2e7bbf714" not found in pod's containers
	Jun 10 14:14:24 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:24.466486    1881 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-q8s5b" (UniqueName: "kubernetes.io/secret/61c94a2a-3fad-4cfb-8c6a-394c988d6d2f-ingress-nginx-token-q8s5b") pod "61c94a2a-3fad-4cfb-8c6a-394c988d6d2f" (UID: "61c94a2a-3fad-4cfb-8c6a-394c988d6d2f")
	Jun 10 14:14:24 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:24.466529    1881 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/61c94a2a-3fad-4cfb-8c6a-394c988d6d2f-webhook-cert") pod "61c94a2a-3fad-4cfb-8c6a-394c988d6d2f" (UID: "61c94a2a-3fad-4cfb-8c6a-394c988d6d2f")
	Jun 10 14:14:24 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:24.468275    1881 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c94a2a-3fad-4cfb-8c6a-394c988d6d2f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "61c94a2a-3fad-4cfb-8c6a-394c988d6d2f" (UID: "61c94a2a-3fad-4cfb-8c6a-394c988d6d2f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 14:14:24 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:24.468383    1881 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c94a2a-3fad-4cfb-8c6a-394c988d6d2f-ingress-nginx-token-q8s5b" (OuterVolumeSpecName: "ingress-nginx-token-q8s5b") pod "61c94a2a-3fad-4cfb-8c6a-394c988d6d2f" (UID: "61c94a2a-3fad-4cfb-8c6a-394c988d6d2f"). InnerVolumeSpecName "ingress-nginx-token-q8s5b". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 14:14:24 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:24.566763    1881 reconciler.go:319] Volume detached for volume "ingress-nginx-token-q8s5b" (UniqueName: "kubernetes.io/secret/61c94a2a-3fad-4cfb-8c6a-394c988d6d2f-ingress-nginx-token-q8s5b") on node "ingress-addon-legacy-889215" DevicePath ""
	Jun 10 14:14:24 ingress-addon-legacy-889215 kubelet[1881]: I0610 14:14:24.566786    1881 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/61c94a2a-3fad-4cfb-8c6a-394c988d6d2f-webhook-cert") on node "ingress-addon-legacy-889215" DevicePath ""
	
	* 
	* ==> storage-provisioner [4bfc5576adeb6bb1e149536bd62b7674ec8cc55bcf15ea720ad01df5124e94ec] <==
	* I0610 14:11:22.806167       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 14:11:22.813435       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 14:11:22.813480       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 14:11:22.818901       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 14:11:22.819051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-889215_a6cd93f5-7076-40a4-8df8-16e25d6b02aa!
	I0610 14:11:22.819062       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2df117d-9e77-4f93-bc45-6847f9a7afc1", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-889215_a6cd93f5-7076-40a4-8df8-16e25d6b02aa became leader
	I0610 14:11:22.919519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-889215_a6cd93f5-7076-40a4-8df8-16e25d6b02aa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-889215 -n ingress-addon-legacy-889215
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-889215 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (172.97s)
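The repeated kubelet `ImageInspectError` entries above all stem from CRI-O rejecting the short-name image reference `cryptexlabs/minikube-ingress-dns:0.3.0@sha256:…` because the node's `/etc/containers/registries.conf` defines no unqualified-search registries. A minimal sketch of the config fragment that would let such short names resolve — the registry choice is an assumption for illustration, not taken from this run:

```toml
# /etc/containers/registries.conf — hypothetical fragment, not from this node.
# With this set, a short name like "cryptexlabs/minikube-ingress-dns" is
# resolved by prefixing each listed registry in order (docker.io is an
# assumed choice here).
unqualified-search-registries = ["docker.io"]
```

Fully qualifying the image reference (e.g. `docker.io/cryptexlabs/minikube-ingress-dns:…`) would avoid the short-name lookup entirely.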

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-6nqgr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-6nqgr -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-6nqgr -- sh -c "ping -c 1 192.168.58.1": exit status 1 (155.080894ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-6nqgr): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-r6l8p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-r6l8p -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-r6l8p -- sh -c "ping -c 1 192.168.58.1": exit status 1 (158.490805ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-r6l8p): exit status 1
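Both busybox pods fail with `ping: permission denied (are you root?)`: the ICMP echo begins (the `PING … 56 data bytes` header is printed) but opening the raw socket is denied for the non-root container user. Unprivileged `ping` needs either `CAP_NET_RAW` on the container or the node's `net.ipv4.ping_group_range` sysctl opened up. A hedged sketch of a pod spec granting the capability — the names are illustrative, not taken from the test's busybox deployment:

```yaml
# Hypothetical pod fragment: adds CAP_NET_RAW so "ping" can open a raw
# ICMP socket without running as root. Pod/container names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: ping-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        add: ["NET_RAW"]
```

Alternatively, widening `net.ipv4.ping_group_range` on the node (e.g. via a sysctl setting) lets unprivileged processes use ICMP datagram sockets without any added capability.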
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-007346
helpers_test.go:235: (dbg) docker inspect multinode-007346:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd",
	        "Created": "2023-06-10T14:19:26.780401874Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 109574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-10T14:19:27.046114427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8b39c0c6b43e13425df6546d3707123c5158cae4cca961fab19bf263071fc26b",
	        "ResolvConfPath": "/var/lib/docker/containers/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/hosts",
	        "LogPath": "/var/lib/docker/containers/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd-json.log",
	        "Name": "/multinode-007346",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-007346:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-007346",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/38a2647b4df2479ff5d799a32a9223c7fc6b3486e5c993a159a8b8ca8b432da8-init/diff:/var/lib/docker/overlay2/0dc1ddb6d62b4bee9beafd5f34260acd069d63ff74f1b10678aeef7f32badeb3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38a2647b4df2479ff5d799a32a9223c7fc6b3486e5c993a159a8b8ca8b432da8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38a2647b4df2479ff5d799a32a9223c7fc6b3486e5c993a159a8b8ca8b432da8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38a2647b4df2479ff5d799a32a9223c7fc6b3486e5c993a159a8b8ca8b432da8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-007346",
	                "Source": "/var/lib/docker/volumes/multinode-007346/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-007346",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-007346",
	                "name.minikube.sigs.k8s.io": "multinode-007346",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c460d45bf7e07261257129aee8f409ba556d93c2effbe94d61a2d4e20d9d0214",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c460d45bf7e0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-007346": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2e604f00710c",
	                        "multinode-007346"
	                    ],
	                    "NetworkID": "9531ca0eb7f77adf7e4a50c549797a563439f3db2633a53cef2ac8a21b5c5969",
	                    "EndpointID": "4f7132bc82240342257c0fe2c2ee7124a6c6e3b9f67436f7ca3ab54d66206a53",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-007346 -n multinode-007346
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-007346 logs -n 25: (1.338614101s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-195597                           | mount-start-2-195597 | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:19 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-195597 ssh -- ls                    | mount-start-2-195597 | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:19 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-184990                           | mount-start-1-184990 | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:19 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-195597 ssh -- ls                    | mount-start-2-195597 | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:19 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-195597                           | mount-start-2-195597 | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:19 UTC |
	| start   | -p mount-start-2-195597                           | mount-start-2-195597 | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:19 UTC |
	| ssh     | mount-start-2-195597 ssh -- ls                    | mount-start-2-195597 | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:19 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-195597                           | mount-start-2-195597 | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:19 UTC |
	| delete  | -p mount-start-1-184990                           | mount-start-1-184990 | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:19 UTC |
	| start   | -p multinode-007346                               | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:19 UTC | 10 Jun 23 14:21 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- apply -f                   | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- rollout                    | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- get pods -o                | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- get pods -o                | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | busybox-67b7f59bb-6nqgr --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | busybox-67b7f59bb-r6l8p --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | busybox-67b7f59bb-6nqgr --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | busybox-67b7f59bb-r6l8p --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | busybox-67b7f59bb-6nqgr -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | busybox-67b7f59bb-r6l8p -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- get pods -o                | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | busybox-67b7f59bb-6nqgr                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC |                     |
	|         | busybox-67b7f59bb-6nqgr -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC | 10 Jun 23 14:21 UTC |
	|         | busybox-67b7f59bb-r6l8p                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-007346 -- exec                       | multinode-007346     | jenkins | v1.30.1 | 10 Jun 23 14:21 UTC |                     |
	|         | busybox-67b7f59bb-r6l8p -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 14:19:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 14:19:21.162249  108966 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:19:21.162374  108966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:19:21.162383  108966 out.go:309] Setting ErrFile to fd 2...
	I0610 14:19:21.162387  108966 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:19:21.162495  108966 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:19:21.163015  108966 out.go:303] Setting JSON to false
	I0610 14:19:21.164052  108966 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7316,"bootTime":1686399445,"procs":490,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:19:21.164106  108966 start.go:137] virtualization: kvm guest
	I0610 14:19:21.166764  108966 out.go:177] * [multinode-007346] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 14:19:21.169016  108966 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 14:19:21.169034  108966 notify.go:220] Checking for updates...
	I0610 14:19:21.170617  108966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:19:21.172398  108966 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:19:21.174032  108966 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:19:21.176103  108966 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 14:19:21.178938  108966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 14:19:21.180631  108966 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 14:19:21.201881  108966 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:19:21.201980  108966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:19:21.252415  108966 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-06-10 14:19:21.243511696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:19:21.252510  108966 docker.go:294] overlay module found
	I0610 14:19:21.254600  108966 out.go:177] * Using the docker driver based on user configuration
	I0610 14:19:21.256429  108966 start.go:297] selected driver: docker
	I0610 14:19:21.256441  108966 start.go:875] validating driver "docker" against <nil>
	I0610 14:19:21.256451  108966 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 14:19:21.257131  108966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:19:21.302129  108966 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:36 SystemTime:2023-06-10 14:19:21.294506849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:19:21.302292  108966 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 14:19:21.302483  108966 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 14:19:21.304597  108966 out.go:177] * Using Docker driver with root privileges
	I0610 14:19:21.306499  108966 cni.go:84] Creating CNI manager for ""
	I0610 14:19:21.306520  108966 cni.go:136] 0 nodes found, recommending kindnet
	I0610 14:19:21.306527  108966 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 14:19:21.306537  108966 start_flags.go:319] config:
	{Name:multinode-007346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-007346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:19:21.308217  108966 out.go:177] * Starting control plane node multinode-007346 in cluster multinode-007346
	I0610 14:19:21.309807  108966 cache.go:122] Beginning downloading kic base image for docker with crio
	I0610 14:19:21.311283  108966 out.go:177] * Pulling base image ...
	I0610 14:19:21.312698  108966 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:19:21.312728  108966 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 14:19:21.312738  108966 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4
	I0610 14:19:21.312746  108966 cache.go:57] Caching tarball of preloaded images
	I0610 14:19:21.312835  108966 preload.go:174] Found /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 14:19:21.312850  108966 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0610 14:19:21.313143  108966 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/config.json ...
	I0610 14:19:21.313164  108966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/config.json: {Name:mk280d87ef35ef018590bd1d7e73c8c000a9aed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:21.326931  108966 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0610 14:19:21.326950  108966 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0610 14:19:21.326965  108966 cache.go:195] Successfully downloaded all kic artifacts
	I0610 14:19:21.326987  108966 start.go:364] acquiring machines lock for multinode-007346: {Name:mk05dee24c78b04c6defaa2658f86e16fa2fed05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:19:21.327071  108966 start.go:368] acquired machines lock for "multinode-007346" in 62.973µs
	I0610 14:19:21.327096  108966 start.go:93] Provisioning new machine with config: &{Name:multinode-007346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-007346 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 14:19:21.327182  108966 start.go:125] createHost starting for "" (driver="docker")
	I0610 14:19:21.329162  108966 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0610 14:19:21.329354  108966 start.go:159] libmachine.API.Create for "multinode-007346" (driver="docker")
	I0610 14:19:21.329378  108966 client.go:168] LocalClient.Create starting
	I0610 14:19:21.329470  108966 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem
	I0610 14:19:21.329506  108966 main.go:141] libmachine: Decoding PEM data...
	I0610 14:19:21.329524  108966 main.go:141] libmachine: Parsing certificate...
	I0610 14:19:21.329576  108966 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem
	I0610 14:19:21.329607  108966 main.go:141] libmachine: Decoding PEM data...
	I0610 14:19:21.329618  108966 main.go:141] libmachine: Parsing certificate...
	I0610 14:19:21.329894  108966 cli_runner.go:164] Run: docker network inspect multinode-007346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0610 14:19:21.344172  108966 cli_runner.go:211] docker network inspect multinode-007346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0610 14:19:21.344254  108966 network_create.go:281] running [docker network inspect multinode-007346] to gather additional debugging logs...
	I0610 14:19:21.344275  108966 cli_runner.go:164] Run: docker network inspect multinode-007346
	W0610 14:19:21.358147  108966 cli_runner.go:211] docker network inspect multinode-007346 returned with exit code 1
	I0610 14:19:21.358170  108966 network_create.go:284] error running [docker network inspect multinode-007346]: docker network inspect multinode-007346: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-007346 not found
	I0610 14:19:21.358189  108966 network_create.go:286] output of [docker network inspect multinode-007346]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-007346 not found
	
	** /stderr **
	I0610 14:19:21.358242  108966 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 14:19:21.372945  108966 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c96b3e433254 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:4c:f2:39:d0} reservation:<nil>}
	I0610 14:19:21.373373  108966 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00138d650}
	I0610 14:19:21.373398  108966 network_create.go:123] attempt to create docker network multinode-007346 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0610 14:19:21.373431  108966 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-007346 multinode-007346
	I0610 14:19:21.424586  108966 network_create.go:107] docker network multinode-007346 192.168.58.0/24 created
	I0610 14:19:21.424614  108966 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-007346" container
	I0610 14:19:21.424674  108966 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0610 14:19:21.438716  108966 cli_runner.go:164] Run: docker volume create multinode-007346 --label name.minikube.sigs.k8s.io=multinode-007346 --label created_by.minikube.sigs.k8s.io=true
	I0610 14:19:21.454846  108966 oci.go:103] Successfully created a docker volume multinode-007346
	I0610 14:19:21.454900  108966 cli_runner.go:164] Run: docker run --rm --name multinode-007346-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-007346 --entrypoint /usr/bin/test -v multinode-007346:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
	I0610 14:19:21.963886  108966 oci.go:107] Successfully prepared a docker volume multinode-007346
	I0610 14:19:21.963937  108966 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:19:21.963960  108966 kic.go:190] Starting extracting preloaded images to volume ...
	I0610 14:19:21.964028  108966 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-007346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
	I0610 14:19:26.721080  108966 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-007346:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (4.756995273s)
	I0610 14:19:26.721108  108966 kic.go:199] duration metric: took 4.757146 seconds to extract preloaded images to volume
	W0610 14:19:26.721239  108966 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0610 14:19:26.721328  108966 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0610 14:19:26.766524  108966 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-007346 --name multinode-007346 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-007346 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-007346 --network multinode-007346 --ip 192.168.58.2 --volume multinode-007346:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0610 14:19:27.055418  108966 cli_runner.go:164] Run: docker container inspect multinode-007346 --format={{.State.Running}}
	I0610 14:19:27.071612  108966 cli_runner.go:164] Run: docker container inspect multinode-007346 --format={{.State.Status}}
	I0610 14:19:27.090447  108966 cli_runner.go:164] Run: docker exec multinode-007346 stat /var/lib/dpkg/alternatives/iptables
	I0610 14:19:27.153208  108966 oci.go:144] the created container "multinode-007346" has a running status.
	I0610 14:19:27.153236  108966 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa...
	I0610 14:19:27.361959  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0610 14:19:27.362009  108966 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0610 14:19:27.383501  108966 cli_runner.go:164] Run: docker container inspect multinode-007346 --format={{.State.Status}}
	I0610 14:19:27.401808  108966 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0610 14:19:27.401830  108966 kic_runner.go:114] Args: [docker exec --privileged multinode-007346 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0610 14:19:27.472410  108966 cli_runner.go:164] Run: docker container inspect multinode-007346 --format={{.State.Status}}
	I0610 14:19:27.493293  108966 machine.go:88] provisioning docker machine ...
	I0610 14:19:27.493340  108966 ubuntu.go:169] provisioning hostname "multinode-007346"
	I0610 14:19:27.493403  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:27.512872  108966 main.go:141] libmachine: Using SSH client type: native
	I0610 14:19:27.513554  108966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0610 14:19:27.513581  108966 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-007346 && echo "multinode-007346" | sudo tee /etc/hostname
	I0610 14:19:27.773049  108966 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-007346
	
	I0610 14:19:27.773130  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:27.790122  108966 main.go:141] libmachine: Using SSH client type: native
	I0610 14:19:27.790692  108966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0610 14:19:27.790713  108966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-007346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-007346/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-007346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 14:19:27.909840  108966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 14:19:27.909865  108966 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15074-18675/.minikube CaCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15074-18675/.minikube}
	I0610 14:19:27.909892  108966 ubuntu.go:177] setting up certificates
	I0610 14:19:27.909901  108966 provision.go:83] configureAuth start
	I0610 14:19:27.909950  108966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-007346
	I0610 14:19:27.925414  108966 provision.go:138] copyHostCerts
	I0610 14:19:27.925449  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem
	I0610 14:19:27.925475  108966 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem, removing ...
	I0610 14:19:27.925484  108966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem
	I0610 14:19:27.925552  108966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem (1078 bytes)
	I0610 14:19:27.925626  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem
	I0610 14:19:27.925646  108966 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem, removing ...
	I0610 14:19:27.925654  108966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem
	I0610 14:19:27.925681  108966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem (1123 bytes)
	I0610 14:19:27.925731  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem
	I0610 14:19:27.925754  108966 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem, removing ...
	I0610 14:19:27.925761  108966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem
	I0610 14:19:27.925784  108966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem (1675 bytes)
	I0610 14:19:27.925838  108966 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem org=jenkins.multinode-007346 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-007346]
	I0610 14:19:28.144039  108966 provision.go:172] copyRemoteCerts
	I0610 14:19:28.144103  108966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 14:19:28.144138  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:28.159554  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa Username:docker}
	I0610 14:19:28.245666  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 14:19:28.245716  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 14:19:28.265245  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 14:19:28.265283  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 14:19:28.284126  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 14:19:28.284172  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 14:19:28.302726  108966 provision.go:86] duration metric: configureAuth took 392.815181ms
	I0610 14:19:28.302746  108966 ubuntu.go:193] setting minikube options for container-runtime
	I0610 14:19:28.302881  108966 config.go:182] Loaded profile config "multinode-007346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:19:28.302969  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:28.318474  108966 main.go:141] libmachine: Using SSH client type: native
	I0610 14:19:28.318860  108966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0610 14:19:28.318887  108966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 14:19:28.508623  108966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 14:19:28.508651  108966 machine.go:91] provisioned docker machine in 1.015329132s
	I0610 14:19:28.508662  108966 client.go:171] LocalClient.Create took 7.17927444s
	I0610 14:19:28.508685  108966 start.go:167] duration metric: libmachine.API.Create for "multinode-007346" took 7.179329284s
	I0610 14:19:28.508697  108966 start.go:300] post-start starting for "multinode-007346" (driver="docker")
	I0610 14:19:28.508704  108966 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 14:19:28.508764  108966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 14:19:28.508802  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:28.524437  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa Username:docker}
	I0610 14:19:28.610050  108966 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 14:19:28.612590  108966 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0610 14:19:28.612607  108966 command_runner.go:130] > NAME="Ubuntu"
	I0610 14:19:28.612615  108966 command_runner.go:130] > VERSION_ID="22.04"
	I0610 14:19:28.612622  108966 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0610 14:19:28.612629  108966 command_runner.go:130] > VERSION_CODENAME=jammy
	I0610 14:19:28.612635  108966 command_runner.go:130] > ID=ubuntu
	I0610 14:19:28.612641  108966 command_runner.go:130] > ID_LIKE=debian
	I0610 14:19:28.612648  108966 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0610 14:19:28.612660  108966 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0610 14:19:28.612674  108966 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0610 14:19:28.612689  108966 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0610 14:19:28.612699  108966 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0610 14:19:28.612773  108966 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0610 14:19:28.612808  108966 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0610 14:19:28.612826  108966 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0610 14:19:28.612837  108966 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0610 14:19:28.612850  108966 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/addons for local assets ...
	I0610 14:19:28.612905  108966 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/files for local assets ...
	I0610 14:19:28.612990  108966 filesync.go:149] local asset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> 254852.pem in /etc/ssl/certs
	I0610 14:19:28.613000  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> /etc/ssl/certs/254852.pem
	I0610 14:19:28.613096  108966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 14:19:28.620013  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem --> /etc/ssl/certs/254852.pem (1708 bytes)
	I0610 14:19:28.639096  108966 start.go:303] post-start completed in 130.388075ms
	I0610 14:19:28.639413  108966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-007346
	I0610 14:19:28.654431  108966 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/config.json ...
	I0610 14:19:28.654620  108966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 14:19:28.654655  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:28.669503  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa Username:docker}
	I0610 14:19:28.750397  108966 command_runner.go:130] > 20%!
	(MISSING)I0610 14:19:28.750463  108966 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0610 14:19:28.754009  108966 command_runner.go:130] > 233G
	I0610 14:19:28.754147  108966 start.go:128] duration metric: createHost completed in 7.426953312s
	I0610 14:19:28.754168  108966 start.go:83] releasing machines lock for "multinode-007346", held for 7.42708396s
	I0610 14:19:28.754245  108966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-007346
	I0610 14:19:28.772487  108966 ssh_runner.go:195] Run: cat /version.json
	I0610 14:19:28.772522  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:28.772562  108966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 14:19:28.772623  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:28.788654  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa Username:docker}
	I0610 14:19:28.789011  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa Username:docker}
	I0610 14:19:28.869229  108966 command_runner.go:130] > {"iso_version": "v1.30.1-1685960108-16634", "kicbase_version": "v0.0.39-1686006988-16632", "minikube_version": "v1.30.1", "commit": "c89c641dc1414caa3b81ed2a4c7748b897639468"}
	I0610 14:19:28.951205  108966 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 14:19:28.953206  108966 ssh_runner.go:195] Run: systemctl --version
	I0610 14:19:28.956930  108966 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0610 14:19:28.956970  108966 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0610 14:19:28.957032  108966 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 14:19:29.092988  108966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 14:19:29.096691  108966 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0610 14:19:29.096713  108966 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0610 14:19:29.096721  108966 command_runner.go:130] > Device: 37h/55d	Inode: 801599      Links: 1
	I0610 14:19:29.096727  108966 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 14:19:29.096733  108966 command_runner.go:130] > Access: 2023-04-04 14:31:21.000000000 +0000
	I0610 14:19:29.096738  108966 command_runner.go:130] > Modify: 2023-04-04 14:31:21.000000000 +0000
	I0610 14:19:29.096742  108966 command_runner.go:130] > Change: 2023-06-10 14:01:36.108366698 +0000
	I0610 14:19:29.096747  108966 command_runner.go:130] >  Birth: 2023-06-10 14:01:36.108366698 +0000
	I0610 14:19:29.096899  108966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:19:29.113693  108966 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0610 14:19:29.113761  108966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:19:29.138579  108966 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0610 14:19:29.138611  108966 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0610 14:19:29.138618  108966 start.go:481] detecting cgroup driver to use...
	I0610 14:19:29.138649  108966 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0610 14:19:29.138697  108966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 14:19:29.150978  108966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 14:19:29.159863  108966 docker.go:193] disabling cri-docker service (if available) ...
	I0610 14:19:29.159927  108966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 14:19:29.171416  108966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 14:19:29.183296  108966 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 14:19:29.252952  108966 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 14:19:29.328656  108966 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0610 14:19:29.328697  108966 docker.go:209] disabling docker service ...
	I0610 14:19:29.328735  108966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 14:19:29.344253  108966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 14:19:29.353639  108966 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 14:19:29.431004  108966 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0610 14:19:29.431068  108966 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 14:19:29.440666  108966 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0610 14:19:29.506397  108966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 14:19:29.515922  108966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 14:19:29.528362  108966 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0610 14:19:29.529137  108966 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 14:19:29.529183  108966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:19:29.537095  108966 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 14:19:29.537151  108966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:19:29.544818  108966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:19:29.552395  108966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:19:29.559897  108966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 14:19:29.567144  108966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 14:19:29.573123  108966 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 14:19:29.573736  108966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 14:19:29.580500  108966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 14:19:29.646348  108966 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 14:19:29.741214  108966 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 14:19:29.741288  108966 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 14:19:29.744380  108966 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0610 14:19:29.744403  108966 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 14:19:29.744418  108966 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0610 14:19:29.744435  108966 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 14:19:29.744448  108966 command_runner.go:130] > Access: 2023-06-10 14:19:29.726910893 +0000
	I0610 14:19:29.744457  108966 command_runner.go:130] > Modify: 2023-06-10 14:19:29.726910893 +0000
	I0610 14:19:29.744470  108966 command_runner.go:130] > Change: 2023-06-10 14:19:29.726910893 +0000
	I0610 14:19:29.744479  108966 command_runner.go:130] >  Birth: -
	I0610 14:19:29.744502  108966 start.go:549] Will wait 60s for crictl version
	I0610 14:19:29.744549  108966 ssh_runner.go:195] Run: which crictl
	I0610 14:19:29.747411  108966 command_runner.go:130] > /usr/bin/crictl
	I0610 14:19:29.747546  108966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 14:19:29.778032  108966 command_runner.go:130] > Version:  0.1.0
	I0610 14:19:29.778054  108966 command_runner.go:130] > RuntimeName:  cri-o
	I0610 14:19:29.778059  108966 command_runner.go:130] > RuntimeVersion:  1.24.5
	I0610 14:19:29.778064  108966 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 14:19:29.778080  108966 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0610 14:19:29.778144  108966 ssh_runner.go:195] Run: crio --version
	I0610 14:19:29.807672  108966 command_runner.go:130] > crio version 1.24.5
	I0610 14:19:29.807696  108966 command_runner.go:130] > Version:          1.24.5
	I0610 14:19:29.807709  108966 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0610 14:19:29.807720  108966 command_runner.go:130] > GitTreeState:     clean
	I0610 14:19:29.807727  108966 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0610 14:19:29.807735  108966 command_runner.go:130] > GoVersion:        go1.18.2
	I0610 14:19:29.807741  108966 command_runner.go:130] > Compiler:         gc
	I0610 14:19:29.807746  108966 command_runner.go:130] > Platform:         linux/amd64
	I0610 14:19:29.807753  108966 command_runner.go:130] > Linkmode:         dynamic
	I0610 14:19:29.807760  108966 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0610 14:19:29.807769  108966 command_runner.go:130] > SeccompEnabled:   true
	I0610 14:19:29.807782  108966 command_runner.go:130] > AppArmorEnabled:  false
	I0610 14:19:29.808945  108966 ssh_runner.go:195] Run: crio --version
	I0610 14:19:29.838366  108966 command_runner.go:130] > crio version 1.24.5
	I0610 14:19:29.838389  108966 command_runner.go:130] > Version:          1.24.5
	I0610 14:19:29.838400  108966 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0610 14:19:29.838406  108966 command_runner.go:130] > GitTreeState:     clean
	I0610 14:19:29.838415  108966 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0610 14:19:29.838422  108966 command_runner.go:130] > GoVersion:        go1.18.2
	I0610 14:19:29.838432  108966 command_runner.go:130] > Compiler:         gc
	I0610 14:19:29.838442  108966 command_runner.go:130] > Platform:         linux/amd64
	I0610 14:19:29.838450  108966 command_runner.go:130] > Linkmode:         dynamic
	I0610 14:19:29.838466  108966 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0610 14:19:29.838476  108966 command_runner.go:130] > SeccompEnabled:   true
	I0610 14:19:29.838486  108966 command_runner.go:130] > AppArmorEnabled:  false
	I0610 14:19:29.842052  108966 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0610 14:19:29.843655  108966 cli_runner.go:164] Run: docker network inspect multinode-007346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 14:19:29.858800  108966 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0610 14:19:29.861962  108966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 14:19:29.871620  108966 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:19:29.871681  108966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 14:19:29.915060  108966 command_runner.go:130] > {
	I0610 14:19:29.915081  108966 command_runner.go:130] >   "images": [
	I0610 14:19:29.915087  108966 command_runner.go:130] >     {
	I0610 14:19:29.915099  108966 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0610 14:19:29.915107  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.915115  108966 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0610 14:19:29.915121  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915128  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.915141  108966 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0610 14:19:29.915156  108966 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0610 14:19:29.915165  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915173  108966 command_runner.go:130] >       "size": "65249302",
	I0610 14:19:29.915182  108966 command_runner.go:130] >       "uid": null,
	I0610 14:19:29.915189  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.915199  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.915204  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.915220  108966 command_runner.go:130] >     },
	I0610 14:19:29.915226  108966 command_runner.go:130] >     {
	I0610 14:19:29.915240  108966 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0610 14:19:29.915250  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.915258  108966 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0610 14:19:29.915267  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915274  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.915288  108966 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0610 14:19:29.915299  108966 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0610 14:19:29.915306  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915319  108966 command_runner.go:130] >       "size": "31470524",
	I0610 14:19:29.915329  108966 command_runner.go:130] >       "uid": null,
	I0610 14:19:29.915338  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.915348  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.915355  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.915436  108966 command_runner.go:130] >     },
	I0610 14:19:29.915464  108966 command_runner.go:130] >     {
	I0610 14:19:29.915475  108966 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0610 14:19:29.915487  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.915498  108966 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0610 14:19:29.915507  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915514  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.915529  108966 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0610 14:19:29.915541  108966 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0610 14:19:29.915550  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915557  108966 command_runner.go:130] >       "size": "53621675",
	I0610 14:19:29.915567  108966 command_runner.go:130] >       "uid": null,
	I0610 14:19:29.915575  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.915588  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.915598  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.915606  108966 command_runner.go:130] >     },
	I0610 14:19:29.915611  108966 command_runner.go:130] >     {
	I0610 14:19:29.915624  108966 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0610 14:19:29.915634  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.915639  108966 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0610 14:19:29.915647  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915654  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.915669  108966 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0610 14:19:29.915684  108966 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0610 14:19:29.915701  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915711  108966 command_runner.go:130] >       "size": "297083935",
	I0610 14:19:29.915721  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.915727  108966 command_runner.go:130] >         "value": "0"
	I0610 14:19:29.915734  108966 command_runner.go:130] >       },
	I0610 14:19:29.915738  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.915748  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.915758  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.915764  108966 command_runner.go:130] >     },
	I0610 14:19:29.915774  108966 command_runner.go:130] >     {
	I0610 14:19:29.915788  108966 command_runner.go:130] >       "id": "c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370",
	I0610 14:19:29.915797  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.915808  108966 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.2"
	I0610 14:19:29.915816  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915822  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.915833  108966 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9",
	I0610 14:19:29.915846  108966 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:95388fe585f1d6f65d414678042a281f80593e78cabaeeb8520a0873ebbb54f2"
	I0610 14:19:29.915856  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915865  108966 command_runner.go:130] >       "size": "122053574",
	I0610 14:19:29.915875  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.915883  108966 command_runner.go:130] >         "value": "0"
	I0610 14:19:29.915892  108966 command_runner.go:130] >       },
	I0610 14:19:29.915899  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.915908  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.915915  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.915923  108966 command_runner.go:130] >     },
	I0610 14:19:29.915928  108966 command_runner.go:130] >     {
	I0610 14:19:29.915940  108966 command_runner.go:130] >       "id": "ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12",
	I0610 14:19:29.915950  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.915959  108966 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.2"
	I0610 14:19:29.915968  108966 command_runner.go:130] >       ],
	I0610 14:19:29.915975  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.915989  108966 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:279461bc1c0b4753dc83677a927b9f7827012b3adbcaa5df9dfd4af8b0987bc6",
	I0610 14:19:29.916081  108966 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"
	I0610 14:19:29.916096  108966 command_runner.go:130] >       ],
	I0610 14:19:29.916107  108966 command_runner.go:130] >       "size": "113906988",
	I0610 14:19:29.916115  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.916125  108966 command_runner.go:130] >         "value": "0"
	I0610 14:19:29.916133  108966 command_runner.go:130] >       },
	I0610 14:19:29.916141  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.916147  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.916158  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.916164  108966 command_runner.go:130] >     },
	I0610 14:19:29.916173  108966 command_runner.go:130] >     {
	I0610 14:19:29.916184  108966 command_runner.go:130] >       "id": "b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee",
	I0610 14:19:29.916195  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.916206  108966 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.2"
	I0610 14:19:29.916215  108966 command_runner.go:130] >       ],
	I0610 14:19:29.916222  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.916233  108966 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f",
	I0610 14:19:29.916248  108966 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83"
	I0610 14:19:29.916257  108966 command_runner.go:130] >       ],
	I0610 14:19:29.916265  108966 command_runner.go:130] >       "size": "72709527",
	I0610 14:19:29.916271  108966 command_runner.go:130] >       "uid": null,
	I0610 14:19:29.916284  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.916291  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.916300  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.916306  108966 command_runner.go:130] >     },
	I0610 14:19:29.916316  108966 command_runner.go:130] >     {
	I0610 14:19:29.916326  108966 command_runner.go:130] >       "id": "89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0",
	I0610 14:19:29.916335  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.916361  108966 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.2"
	I0610 14:19:29.916370  108966 command_runner.go:130] >       ],
	I0610 14:19:29.916380  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.916439  108966 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177",
	I0610 14:19:29.916455  108966 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f8be7505892d1671a15afa3ac6c3b31e50da87dd59a4745e30a5b3f9f584ee6e"
	I0610 14:19:29.916466  108966 command_runner.go:130] >       ],
	I0610 14:19:29.916473  108966 command_runner.go:130] >       "size": "59802924",
	I0610 14:19:29.916480  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.916487  108966 command_runner.go:130] >         "value": "0"
	I0610 14:19:29.916492  108966 command_runner.go:130] >       },
	I0610 14:19:29.916500  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.916506  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.916517  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.916522  108966 command_runner.go:130] >     },
	I0610 14:19:29.916531  108966 command_runner.go:130] >     {
	I0610 14:19:29.916541  108966 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0610 14:19:29.916551  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.916561  108966 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0610 14:19:29.916570  108966 command_runner.go:130] >       ],
	I0610 14:19:29.916581  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.916594  108966 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0610 14:19:29.916609  108966 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0610 14:19:29.916618  108966 command_runner.go:130] >       ],
	I0610 14:19:29.916627  108966 command_runner.go:130] >       "size": "750414",
	I0610 14:19:29.916637  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.916646  108966 command_runner.go:130] >         "value": "65535"
	I0610 14:19:29.916653  108966 command_runner.go:130] >       },
	I0610 14:19:29.916658  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.916667  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.916678  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.916683  108966 command_runner.go:130] >     }
	I0610 14:19:29.916692  108966 command_runner.go:130] >   ]
	I0610 14:19:29.916701  108966 command_runner.go:130] > }
	I0610 14:19:29.917359  108966 crio.go:496] all images are preloaded for cri-o runtime.
	I0610 14:19:29.917373  108966 crio.go:415] Images already preloaded, skipping extraction
	I0610 14:19:29.917410  108966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 14:19:29.945070  108966 command_runner.go:130] > {
	I0610 14:19:29.945090  108966 command_runner.go:130] >   "images": [
	I0610 14:19:29.945096  108966 command_runner.go:130] >     {
	I0610 14:19:29.945105  108966 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0610 14:19:29.945110  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.945116  108966 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0610 14:19:29.945120  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945124  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.945136  108966 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0610 14:19:29.945150  108966 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0610 14:19:29.945156  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945167  108966 command_runner.go:130] >       "size": "65249302",
	I0610 14:19:29.945177  108966 command_runner.go:130] >       "uid": null,
	I0610 14:19:29.945186  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.945197  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.945206  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.945213  108966 command_runner.go:130] >     },
	I0610 14:19:29.945216  108966 command_runner.go:130] >     {
	I0610 14:19:29.945225  108966 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0610 14:19:29.945234  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.945247  108966 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0610 14:19:29.945253  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945260  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.945272  108966 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0610 14:19:29.945287  108966 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0610 14:19:29.945296  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945306  108966 command_runner.go:130] >       "size": "31470524",
	I0610 14:19:29.945314  108966 command_runner.go:130] >       "uid": null,
	I0610 14:19:29.945321  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.945331  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.945341  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.945350  108966 command_runner.go:130] >     },
	I0610 14:19:29.945357  108966 command_runner.go:130] >     {
	I0610 14:19:29.945370  108966 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0610 14:19:29.945379  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.945391  108966 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0610 14:19:29.945397  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945404  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.945419  108966 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0610 14:19:29.945434  108966 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0610 14:19:29.945443  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945453  108966 command_runner.go:130] >       "size": "53621675",
	I0610 14:19:29.945462  108966 command_runner.go:130] >       "uid": null,
	I0610 14:19:29.945472  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.945479  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.945483  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.945487  108966 command_runner.go:130] >     },
	I0610 14:19:29.945497  108966 command_runner.go:130] >     {
	I0610 14:19:29.945511  108966 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0610 14:19:29.945521  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.945532  108966 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0610 14:19:29.945540  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945550  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.945563  108966 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0610 14:19:29.945572  108966 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0610 14:19:29.945582  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945593  108966 command_runner.go:130] >       "size": "297083935",
	I0610 14:19:29.945602  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.945612  108966 command_runner.go:130] >         "value": "0"
	I0610 14:19:29.945618  108966 command_runner.go:130] >       },
	I0610 14:19:29.945625  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.945634  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.945644  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.945652  108966 command_runner.go:130] >     },
	I0610 14:19:29.945655  108966 command_runner.go:130] >     {
	I0610 14:19:29.945665  108966 command_runner.go:130] >       "id": "c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370",
	I0610 14:19:29.945675  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.945684  108966 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.2"
	I0610 14:19:29.945690  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945700  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.945714  108966 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9",
	I0610 14:19:29.945730  108966 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:95388fe585f1d6f65d414678042a281f80593e78cabaeeb8520a0873ebbb54f2"
	I0610 14:19:29.945738  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945742  108966 command_runner.go:130] >       "size": "122053574",
	I0610 14:19:29.945746  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.945756  108966 command_runner.go:130] >         "value": "0"
	I0610 14:19:29.945766  108966 command_runner.go:130] >       },
	I0610 14:19:29.945776  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.945785  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.945795  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.945803  108966 command_runner.go:130] >     },
	I0610 14:19:29.945813  108966 command_runner.go:130] >     {
	I0610 14:19:29.945824  108966 command_runner.go:130] >       "id": "ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12",
	I0610 14:19:29.945828  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.945839  108966 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.2"
	I0610 14:19:29.945849  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945859  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.945874  108966 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:279461bc1c0b4753dc83677a927b9f7827012b3adbcaa5df9dfd4af8b0987bc6",
	I0610 14:19:29.945889  108966 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"
	I0610 14:19:29.945898  108966 command_runner.go:130] >       ],
	I0610 14:19:29.945912  108966 command_runner.go:130] >       "size": "113906988",
	I0610 14:19:29.945919  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.945926  108966 command_runner.go:130] >         "value": "0"
	I0610 14:19:29.945932  108966 command_runner.go:130] >       },
	I0610 14:19:29.945943  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.945949  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.945959  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.945965  108966 command_runner.go:130] >     },
	I0610 14:19:29.945971  108966 command_runner.go:130] >     {
	I0610 14:19:29.945981  108966 command_runner.go:130] >       "id": "b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee",
	I0610 14:19:29.945991  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.945997  108966 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.2"
	I0610 14:19:29.946003  108966 command_runner.go:130] >       ],
	I0610 14:19:29.946009  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.946021  108966 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f",
	I0610 14:19:29.946036  108966 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83"
	I0610 14:19:29.946045  108966 command_runner.go:130] >       ],
	I0610 14:19:29.946053  108966 command_runner.go:130] >       "size": "72709527",
	I0610 14:19:29.946062  108966 command_runner.go:130] >       "uid": null,
	I0610 14:19:29.946068  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.946075  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.946082  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.946086  108966 command_runner.go:130] >     },
	I0610 14:19:29.946089  108966 command_runner.go:130] >     {
	I0610 14:19:29.946099  108966 command_runner.go:130] >       "id": "89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0",
	I0610 14:19:29.946110  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.946118  108966 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.2"
	I0610 14:19:29.946127  108966 command_runner.go:130] >       ],
	I0610 14:19:29.946133  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.946220  108966 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177",
	I0610 14:19:29.946239  108966 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f8be7505892d1671a15afa3ac6c3b31e50da87dd59a4745e30a5b3f9f584ee6e"
	I0610 14:19:29.946245  108966 command_runner.go:130] >       ],
	I0610 14:19:29.946252  108966 command_runner.go:130] >       "size": "59802924",
	I0610 14:19:29.946262  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.946269  108966 command_runner.go:130] >         "value": "0"
	I0610 14:19:29.946279  108966 command_runner.go:130] >       },
	I0610 14:19:29.946286  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.946295  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.946302  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.946308  108966 command_runner.go:130] >     },
	I0610 14:19:29.946317  108966 command_runner.go:130] >     {
	I0610 14:19:29.946327  108966 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0610 14:19:29.946336  108966 command_runner.go:130] >       "repoTags": [
	I0610 14:19:29.946340  108966 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0610 14:19:29.946344  108966 command_runner.go:130] >       ],
	I0610 14:19:29.946350  108966 command_runner.go:130] >       "repoDigests": [
	I0610 14:19:29.946376  108966 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0610 14:19:29.946387  108966 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0610 14:19:29.946393  108966 command_runner.go:130] >       ],
	I0610 14:19:29.946400  108966 command_runner.go:130] >       "size": "750414",
	I0610 14:19:29.946406  108966 command_runner.go:130] >       "uid": {
	I0610 14:19:29.946422  108966 command_runner.go:130] >         "value": "65535"
	I0610 14:19:29.946426  108966 command_runner.go:130] >       },
	I0610 14:19:29.946435  108966 command_runner.go:130] >       "username": "",
	I0610 14:19:29.946441  108966 command_runner.go:130] >       "spec": null,
	I0610 14:19:29.946452  108966 command_runner.go:130] >       "pinned": false
	I0610 14:19:29.946457  108966 command_runner.go:130] >     }
	I0610 14:19:29.946463  108966 command_runner.go:130] >   ]
	I0610 14:19:29.946468  108966 command_runner.go:130] > }
	I0610 14:19:29.947060  108966 crio.go:496] all images are preloaded for cri-o runtime.
	I0610 14:19:29.947074  108966 cache_images.go:84] Images are preloaded, skipping loading
	I0610 14:19:29.947122  108966 ssh_runner.go:195] Run: crio config
	I0610 14:19:29.982174  108966 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0610 14:19:29.982213  108966 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0610 14:19:29.982224  108966 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0610 14:19:29.982230  108966 command_runner.go:130] > #
	I0610 14:19:29.982241  108966 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0610 14:19:29.982251  108966 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0610 14:19:29.982261  108966 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0610 14:19:29.982277  108966 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0610 14:19:29.982287  108966 command_runner.go:130] > # reload'.
	I0610 14:19:29.982297  108966 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0610 14:19:29.982307  108966 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0610 14:19:29.982316  108966 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0610 14:19:29.982325  108966 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0610 14:19:29.982331  108966 command_runner.go:130] > [crio]
	I0610 14:19:29.982340  108966 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0610 14:19:29.982348  108966 command_runner.go:130] > # container images, in this directory.
	I0610 14:19:29.982362  108966 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0610 14:19:29.982372  108966 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0610 14:19:29.982380  108966 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0610 14:19:29.982394  108966 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0610 14:19:29.982413  108966 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0610 14:19:29.982423  108966 command_runner.go:130] > # storage_driver = "vfs"
	I0610 14:19:29.982432  108966 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I0610 14:19:29.982446  108966 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0610 14:19:29.982457  108966 command_runner.go:130] > # storage_option = [
	I0610 14:19:29.982462  108966 command_runner.go:130] > # ]
	I0610 14:19:29.982476  108966 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0610 14:19:29.982486  108966 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0610 14:19:29.982498  108966 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0610 14:19:29.982510  108966 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0610 14:19:29.982519  108966 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0610 14:19:29.982533  108966 command_runner.go:130] > # always happen on a node reboot
	I0610 14:19:29.982543  108966 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0610 14:19:29.982551  108966 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0610 14:19:29.982565  108966 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0610 14:19:29.982577  108966 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0610 14:19:29.982589  108966 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0610 14:19:29.982602  108966 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0610 14:19:29.982621  108966 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0610 14:19:29.982631  108966 command_runner.go:130] > # internal_wipe = true
	I0610 14:19:29.982640  108966 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0610 14:19:29.982654  108966 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0610 14:19:29.982665  108966 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0610 14:19:29.982676  108966 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0610 14:19:29.982696  108966 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0610 14:19:29.982706  108966 command_runner.go:130] > [crio.api]
	I0610 14:19:29.982715  108966 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0610 14:19:29.982726  108966 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0610 14:19:29.982733  108966 command_runner.go:130] > # IP address on which the stream server will listen.
	I0610 14:19:29.982740  108966 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0610 14:19:29.982754  108966 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0610 14:19:29.982766  108966 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0610 14:19:29.982772  108966 command_runner.go:130] > # stream_port = "0"
	I0610 14:19:29.982781  108966 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0610 14:19:29.982793  108966 command_runner.go:130] > # stream_enable_tls = false
	I0610 14:19:29.982803  108966 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0610 14:19:29.982810  108966 command_runner.go:130] > # stream_idle_timeout = ""
	I0610 14:19:29.982821  108966 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0610 14:19:29.982834  108966 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0610 14:19:29.982843  108966 command_runner.go:130] > # minutes.
	I0610 14:19:29.982852  108966 command_runner.go:130] > # stream_tls_cert = ""
	I0610 14:19:29.982862  108966 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0610 14:19:29.982876  108966 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0610 14:19:29.982914  108966 command_runner.go:130] > # stream_tls_key = ""
	I0610 14:19:29.982930  108966 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0610 14:19:29.982940  108966 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0610 14:19:29.982949  108966 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0610 14:19:29.982958  108966 command_runner.go:130] > # stream_tls_ca = ""
	I0610 14:19:29.982971  108966 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0610 14:19:29.982979  108966 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0610 14:19:29.982994  108966 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0610 14:19:29.983005  108966 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0610 14:19:29.983030  108966 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0610 14:19:29.983044  108966 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0610 14:19:29.983050  108966 command_runner.go:130] > [crio.runtime]
	I0610 14:19:29.983060  108966 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0610 14:19:29.983074  108966 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0610 14:19:29.983084  108966 command_runner.go:130] > # "nofile=1024:2048"
	I0610 14:19:29.983094  108966 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0610 14:19:29.983104  108966 command_runner.go:130] > # default_ulimits = [
	I0610 14:19:29.983109  108966 command_runner.go:130] > # ]
	I0610 14:19:29.983123  108966 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0610 14:19:29.983136  108966 command_runner.go:130] > # no_pivot = false
	I0610 14:19:29.983149  108966 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0610 14:19:29.983162  108966 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0610 14:19:29.983170  108966 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0610 14:19:29.983182  108966 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0610 14:19:29.983193  108966 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0610 14:19:29.983204  108966 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0610 14:19:29.983214  108966 command_runner.go:130] > # conmon = ""
	I0610 14:19:29.983221  108966 command_runner.go:130] > # Cgroup setting for conmon
	I0610 14:19:29.983236  108966 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0610 14:19:29.983247  108966 command_runner.go:130] > conmon_cgroup = "pod"
	I0610 14:19:29.983257  108966 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0610 14:19:29.983269  108966 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0610 14:19:29.983282  108966 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0610 14:19:29.983292  108966 command_runner.go:130] > # conmon_env = [
	I0610 14:19:29.983299  108966 command_runner.go:130] > # ]
	I0610 14:19:29.983311  108966 command_runner.go:130] > # Additional environment variables to set for all the
	I0610 14:19:29.983320  108966 command_runner.go:130] > # containers. These are overridden if set in the
	I0610 14:19:29.983333  108966 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0610 14:19:29.983343  108966 command_runner.go:130] > # default_env = [
	I0610 14:19:29.983349  108966 command_runner.go:130] > # ]
	I0610 14:19:29.983359  108966 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0610 14:19:29.983369  108966 command_runner.go:130] > # selinux = false
	I0610 14:19:29.983380  108966 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0610 14:19:29.983393  108966 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0610 14:19:29.983413  108966 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0610 14:19:29.983428  108966 command_runner.go:130] > # seccomp_profile = ""
	I0610 14:19:29.983436  108966 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0610 14:19:29.983470  108966 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0610 14:19:29.983484  108966 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0610 14:19:29.983492  108966 command_runner.go:130] > # which might increase security.
	I0610 14:19:29.983503  108966 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0610 14:19:29.983513  108966 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0610 14:19:29.983526  108966 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0610 14:19:29.983538  108966 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0610 14:19:29.983550  108966 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0610 14:19:29.983561  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:19:29.983572  108966 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0610 14:19:29.983586  108966 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0610 14:19:29.983597  108966 command_runner.go:130] > # the cgroup blockio controller.
	I0610 14:19:29.983604  108966 command_runner.go:130] > # blockio_config_file = ""
	I0610 14:19:29.983618  108966 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0610 14:19:29.983627  108966 command_runner.go:130] > # irqbalance daemon.
	I0610 14:19:29.983636  108966 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0610 14:19:29.983646  108966 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0610 14:19:29.983656  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:19:29.983665  108966 command_runner.go:130] > # rdt_config_file = ""
	I0610 14:19:29.983677  108966 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0610 14:19:29.983686  108966 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0610 14:19:29.983700  108966 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0610 14:19:29.983710  108966 command_runner.go:130] > # separate_pull_cgroup = ""
	I0610 14:19:29.983722  108966 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0610 14:19:29.983732  108966 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0610 14:19:29.983741  108966 command_runner.go:130] > # will be added.
	I0610 14:19:29.983749  108966 command_runner.go:130] > # default_capabilities = [
	I0610 14:19:29.983758  108966 command_runner.go:130] > # 	"CHOWN",
	I0610 14:19:29.983765  108966 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0610 14:19:29.983774  108966 command_runner.go:130] > # 	"FSETID",
	I0610 14:19:29.983780  108966 command_runner.go:130] > # 	"FOWNER",
	I0610 14:19:29.983789  108966 command_runner.go:130] > # 	"SETGID",
	I0610 14:19:29.983795  108966 command_runner.go:130] > # 	"SETUID",
	I0610 14:19:29.983804  108966 command_runner.go:130] > # 	"SETPCAP",
	I0610 14:19:29.983811  108966 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0610 14:19:29.983821  108966 command_runner.go:130] > # 	"KILL",
	I0610 14:19:29.983828  108966 command_runner.go:130] > # ]
	I0610 14:19:29.983835  108966 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0610 14:19:29.983884  108966 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0610 14:19:29.983896  108966 command_runner.go:130] > # add_inheritable_capabilities = true
	I0610 14:19:29.983907  108966 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0610 14:19:29.983917  108966 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0610 14:19:29.983925  108966 command_runner.go:130] > # default_sysctls = [
	I0610 14:19:29.983931  108966 command_runner.go:130] > # ]
	I0610 14:19:29.983942  108966 command_runner.go:130] > # List of devices on the host that a
	I0610 14:19:29.983953  108966 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0610 14:19:29.983963  108966 command_runner.go:130] > # allowed_devices = [
	I0610 14:19:29.983969  108966 command_runner.go:130] > # 	"/dev/fuse",
	I0610 14:19:29.983977  108966 command_runner.go:130] > # ]
	I0610 14:19:29.983985  108966 command_runner.go:130] > # List of additional devices. specified as
	I0610 14:19:29.984008  108966 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0610 14:19:29.984015  108966 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0610 14:19:29.984025  108966 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0610 14:19:29.984039  108966 command_runner.go:130] > # additional_devices = [
	I0610 14:19:29.984045  108966 command_runner.go:130] > # ]
	I0610 14:19:29.984057  108966 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0610 14:19:29.984066  108966 command_runner.go:130] > # cdi_spec_dirs = [
	I0610 14:19:29.984073  108966 command_runner.go:130] > # 	"/etc/cdi",
	I0610 14:19:29.984082  108966 command_runner.go:130] > # 	"/var/run/cdi",
	I0610 14:19:29.984088  108966 command_runner.go:130] > # ]
	I0610 14:19:29.984100  108966 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0610 14:19:29.984109  108966 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0610 14:19:29.984114  108966 command_runner.go:130] > # Defaults to false.
	I0610 14:19:29.984122  108966 command_runner.go:130] > # device_ownership_from_security_context = false
	I0610 14:19:29.984136  108966 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0610 14:19:29.984146  108966 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0610 14:19:29.984155  108966 command_runner.go:130] > # hooks_dir = [
	I0610 14:19:29.984168  108966 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0610 14:19:29.984176  108966 command_runner.go:130] > # ]
	I0610 14:19:29.984186  108966 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0610 14:19:29.984198  108966 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0610 14:19:29.984206  108966 command_runner.go:130] > # its default mounts from the following two files:
	I0610 14:19:29.984210  108966 command_runner.go:130] > #
	I0610 14:19:29.984223  108966 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0610 14:19:29.984237  108966 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0610 14:19:29.984250  108966 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0610 14:19:29.984258  108966 command_runner.go:130] > #
	I0610 14:19:29.984268  108966 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0610 14:19:29.984281  108966 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0610 14:19:29.984293  108966 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0610 14:19:29.984302  108966 command_runner.go:130] > #      only add mounts it finds in this file.
	I0610 14:19:29.984305  108966 command_runner.go:130] > #
	I0610 14:19:29.984310  108966 command_runner.go:130] > # default_mounts_file = ""
	I0610 14:19:29.984319  108966 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0610 14:19:29.984334  108966 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0610 14:19:29.984344  108966 command_runner.go:130] > # pids_limit = 0
	I0610 14:19:29.984354  108966 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0610 14:19:29.984366  108966 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0610 14:19:29.984379  108966 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0610 14:19:29.984394  108966 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0610 14:19:29.984402  108966 command_runner.go:130] > # log_size_max = -1
	I0610 14:19:29.984417  108966 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0610 14:19:29.984442  108966 command_runner.go:130] > # log_to_journald = false
	I0610 14:19:29.984453  108966 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0610 14:19:29.984464  108966 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0610 14:19:29.984475  108966 command_runner.go:130] > # Path to directory for container attach sockets.
	I0610 14:19:29.984486  108966 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0610 14:19:29.984498  108966 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0610 14:19:29.984507  108966 command_runner.go:130] > # bind_mount_prefix = ""
	I0610 14:19:29.984513  108966 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0610 14:19:29.984522  108966 command_runner.go:130] > # read_only = false
	I0610 14:19:29.984535  108966 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0610 14:19:29.984548  108966 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0610 14:19:29.984558  108966 command_runner.go:130] > # live configuration reload.
	I0610 14:19:29.984567  108966 command_runner.go:130] > # log_level = "info"
	I0610 14:19:29.984576  108966 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0610 14:19:29.984588  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:19:29.984596  108966 command_runner.go:130] > # log_filter = ""
	I0610 14:19:29.984602  108966 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0610 14:19:29.984615  108966 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0610 14:19:29.984625  108966 command_runner.go:130] > # separated by comma.
	I0610 14:19:29.984631  108966 command_runner.go:130] > # uid_mappings = ""
	I0610 14:19:29.984644  108966 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0610 14:19:29.984657  108966 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0610 14:19:29.984667  108966 command_runner.go:130] > # separated by comma.
	I0610 14:19:29.984675  108966 command_runner.go:130] > # gid_mappings = ""
	I0610 14:19:29.984687  108966 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0610 14:19:29.984697  108966 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0610 14:19:29.984703  108966 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0610 14:19:29.984713  108966 command_runner.go:130] > # minimum_mappable_uid = -1
	I0610 14:19:29.984750  108966 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0610 14:19:29.984764  108966 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0610 14:19:29.984776  108966 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0610 14:19:29.984786  108966 command_runner.go:130] > # minimum_mappable_gid = -1
	I0610 14:19:29.984795  108966 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0610 14:19:29.984802  108966 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0610 14:19:29.984814  108966 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0610 14:19:29.984824  108966 command_runner.go:130] > # ctr_stop_timeout = 30
	I0610 14:19:29.984834  108966 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0610 14:19:29.984848  108966 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0610 14:19:29.984863  108966 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0610 14:19:29.984874  108966 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0610 14:19:29.984883  108966 command_runner.go:130] > # drop_infra_ctr = true
	I0610 14:19:29.984893  108966 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0610 14:19:29.984902  108966 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0610 14:19:29.984913  108966 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0610 14:19:29.984923  108966 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0610 14:19:29.984934  108966 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0610 14:19:29.984945  108966 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0610 14:19:29.984955  108966 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0610 14:19:29.984968  108966 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0610 14:19:29.984978  108966 command_runner.go:130] > # pinns_path = ""
	I0610 14:19:29.984988  108966 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0610 14:19:29.984998  108966 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0610 14:19:29.985007  108966 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0610 14:19:29.985017  108966 command_runner.go:130] > # default_runtime = "runc"
	I0610 14:19:29.985027  108966 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0610 14:19:29.985042  108966 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0610 14:19:29.985059  108966 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0610 14:19:29.985070  108966 command_runner.go:130] > # creation as a file is not desired either.
	I0610 14:19:29.985082  108966 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0610 14:19:29.985092  108966 command_runner.go:130] > # the hostname is being managed dynamically.
	I0610 14:19:29.985104  108966 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0610 14:19:29.985113  108966 command_runner.go:130] > # ]
	I0610 14:19:29.985123  108966 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0610 14:19:29.985135  108966 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0610 14:19:29.985149  108966 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0610 14:19:29.985161  108966 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0610 14:19:29.985167  108966 command_runner.go:130] > #
	I0610 14:19:29.985171  108966 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0610 14:19:29.985182  108966 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0610 14:19:29.985193  108966 command_runner.go:130] > #  runtime_type = "oci"
	I0610 14:19:29.985201  108966 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0610 14:19:29.985212  108966 command_runner.go:130] > #  privileged_without_host_devices = false
	I0610 14:19:29.985222  108966 command_runner.go:130] > #  allowed_annotations = []
	I0610 14:19:29.985228  108966 command_runner.go:130] > # Where:
	I0610 14:19:29.985240  108966 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0610 14:19:29.985256  108966 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0610 14:19:29.985266  108966 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0610 14:19:29.985280  108966 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0610 14:19:29.985289  108966 command_runner.go:130] > #   in $PATH.
	I0610 14:19:29.985300  108966 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0610 14:19:29.985311  108966 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0610 14:19:29.985324  108966 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0610 14:19:29.985333  108966 command_runner.go:130] > #   state.
	I0610 14:19:29.985343  108966 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0610 14:19:29.985355  108966 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0610 14:19:29.985365  108966 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0610 14:19:29.985375  108966 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0610 14:19:29.985388  108966 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0610 14:19:29.985399  108966 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0610 14:19:29.985414  108966 command_runner.go:130] > #   The currently recognized values are:
	I0610 14:19:29.985427  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0610 14:19:29.985441  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0610 14:19:29.985452  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0610 14:19:29.985461  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0610 14:19:29.985472  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0610 14:19:29.985508  108966 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0610 14:19:29.985521  108966 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0610 14:19:29.985534  108966 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0610 14:19:29.985545  108966 command_runner.go:130] > #   should be moved to the container's cgroup
	I0610 14:19:29.985554  108966 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0610 14:19:29.985559  108966 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0610 14:19:29.985568  108966 command_runner.go:130] > runtime_type = "oci"
	I0610 14:19:29.985578  108966 command_runner.go:130] > runtime_root = "/run/runc"
	I0610 14:19:29.985585  108966 command_runner.go:130] > runtime_config_path = ""
	I0610 14:19:29.985595  108966 command_runner.go:130] > monitor_path = ""
	I0610 14:19:29.985606  108966 command_runner.go:130] > monitor_cgroup = ""
	I0610 14:19:29.985614  108966 command_runner.go:130] > monitor_exec_cgroup = ""
	I0610 14:19:29.985710  108966 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0610 14:19:29.985727  108966 command_runner.go:130] > # running containers
	I0610 14:19:29.985734  108966 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0610 14:19:29.985742  108966 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0610 14:19:29.985756  108966 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0610 14:19:29.985769  108966 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0610 14:19:29.985780  108966 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0610 14:19:29.985790  108966 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0610 14:19:29.985801  108966 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0610 14:19:29.985811  108966 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0610 14:19:29.985817  108966 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0610 14:19:29.985825  108966 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0610 14:19:29.985836  108966 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0610 14:19:29.985849  108966 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0610 14:19:29.985859  108966 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0610 14:19:29.985875  108966 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0610 14:19:29.985891  108966 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0610 14:19:29.985905  108966 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0610 14:19:29.985920  108966 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0610 14:19:29.985932  108966 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0610 14:19:29.985945  108966 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0610 14:19:29.985960  108966 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0610 14:19:29.985971  108966 command_runner.go:130] > # Example:
	I0610 14:19:29.985982  108966 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0610 14:19:29.985994  108966 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0610 14:19:29.986005  108966 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0610 14:19:29.986016  108966 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0610 14:19:29.986025  108966 command_runner.go:130] > # cpuset = 0
	I0610 14:19:29.986034  108966 command_runner.go:130] > # cpushares = "0-1"
	I0610 14:19:29.986044  108966 command_runner.go:130] > # Where:
	I0610 14:19:29.986052  108966 command_runner.go:130] > # The workload name is workload-type.
	I0610 14:19:29.986067  108966 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0610 14:19:29.986079  108966 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0610 14:19:29.986091  108966 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0610 14:19:29.986106  108966 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0610 14:19:29.986116  108966 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0610 14:19:29.986122  108966 command_runner.go:130] > # 
	I0610 14:19:29.986133  108966 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0610 14:19:29.986147  108966 command_runner.go:130] > #
	I0610 14:19:29.986163  108966 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0610 14:19:29.986175  108966 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0610 14:19:29.986192  108966 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0610 14:19:29.986214  108966 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0610 14:19:29.986227  108966 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0610 14:19:29.986236  108966 command_runner.go:130] > [crio.image]
	I0610 14:19:29.986246  108966 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0610 14:19:29.986256  108966 command_runner.go:130] > # default_transport = "docker://"
	I0610 14:19:29.986268  108966 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0610 14:19:29.986279  108966 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0610 14:19:29.986287  108966 command_runner.go:130] > # global_auth_file = ""
	I0610 14:19:29.986299  108966 command_runner.go:130] > # The image used to instantiate infra containers.
	I0610 14:19:29.986310  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:19:29.986320  108966 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0610 14:19:29.986333  108966 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0610 14:19:29.986346  108966 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0610 14:19:29.986356  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:19:29.986371  108966 command_runner.go:130] > # pause_image_auth_file = ""
	I0610 14:19:29.986381  108966 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0610 14:19:29.986393  108966 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0610 14:19:29.986411  108966 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0610 14:19:29.986431  108966 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0610 14:19:29.986440  108966 command_runner.go:130] > # pause_command = "/pause"
	I0610 14:19:29.986453  108966 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0610 14:19:29.986465  108966 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0610 14:19:29.986474  108966 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0610 14:19:29.986487  108966 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0610 14:19:29.986499  108966 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0610 14:19:29.986509  108966 command_runner.go:130] > # signature_policy = ""
	I0610 14:19:29.986522  108966 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0610 14:19:29.986534  108966 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0610 14:19:29.986544  108966 command_runner.go:130] > # changing them here.
	I0610 14:19:29.986553  108966 command_runner.go:130] > # insecure_registries = [
	I0610 14:19:29.986560  108966 command_runner.go:130] > # ]
	I0610 14:19:29.986570  108966 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0610 14:19:29.986585  108966 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0610 14:19:29.986599  108966 command_runner.go:130] > # image_volumes = "mkdir"
	I0610 14:19:29.986611  108966 command_runner.go:130] > # Temporary directory to use for storing big files
	I0610 14:19:29.986621  108966 command_runner.go:130] > # big_files_temporary_dir = ""
	I0610 14:19:29.986633  108966 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0610 14:19:29.986642  108966 command_runner.go:130] > # CNI plugins.
	I0610 14:19:29.986652  108966 command_runner.go:130] > [crio.network]
	I0610 14:19:29.986663  108966 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0610 14:19:29.986672  108966 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0610 14:19:29.986682  108966 command_runner.go:130] > # cni_default_network = ""
	I0610 14:19:29.986695  108966 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0610 14:19:29.986706  108966 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0610 14:19:29.986718  108966 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0610 14:19:29.986727  108966 command_runner.go:130] > # plugin_dirs = [
	I0610 14:19:29.986737  108966 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0610 14:19:29.986745  108966 command_runner.go:130] > # ]
	I0610 14:19:29.986755  108966 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0610 14:19:29.986762  108966 command_runner.go:130] > [crio.metrics]
	I0610 14:19:29.986771  108966 command_runner.go:130] > # Globally enable or disable metrics support.
	I0610 14:19:29.986780  108966 command_runner.go:130] > # enable_metrics = false
	I0610 14:19:29.986792  108966 command_runner.go:130] > # Specify enabled metrics collectors.
	I0610 14:19:29.986803  108966 command_runner.go:130] > # Per default all metrics are enabled.
	I0610 14:19:29.986819  108966 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0610 14:19:29.986831  108966 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0610 14:19:29.986843  108966 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0610 14:19:29.986853  108966 command_runner.go:130] > # metrics_collectors = [
	I0610 14:19:29.986861  108966 command_runner.go:130] > # 	"operations",
	I0610 14:19:29.986866  108966 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0610 14:19:29.986876  108966 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0610 14:19:29.986887  108966 command_runner.go:130] > # 	"operations_errors",
	I0610 14:19:29.986894  108966 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0610 14:19:29.986904  108966 command_runner.go:130] > # 	"image_pulls_by_name",
	I0610 14:19:29.986914  108966 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0610 14:19:29.986924  108966 command_runner.go:130] > # 	"image_pulls_failures",
	I0610 14:19:29.986933  108966 command_runner.go:130] > # 	"image_pulls_successes",
	I0610 14:19:29.986943  108966 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0610 14:19:29.986955  108966 command_runner.go:130] > # 	"image_layer_reuse",
	I0610 14:19:29.986962  108966 command_runner.go:130] > # 	"containers_oom_total",
	I0610 14:19:29.986968  108966 command_runner.go:130] > # 	"containers_oom",
	I0610 14:19:29.986979  108966 command_runner.go:130] > # 	"processes_defunct",
	I0610 14:19:29.986989  108966 command_runner.go:130] > # 	"operations_total",
	I0610 14:19:29.986996  108966 command_runner.go:130] > # 	"operations_latency_seconds",
	I0610 14:19:29.987007  108966 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0610 14:19:29.987016  108966 command_runner.go:130] > # 	"operations_errors_total",
	I0610 14:19:29.987026  108966 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0610 14:19:29.987036  108966 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0610 14:19:29.987050  108966 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0610 14:19:29.987058  108966 command_runner.go:130] > # 	"image_pulls_success_total",
	I0610 14:19:29.987065  108966 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0610 14:19:29.987072  108966 command_runner.go:130] > # 	"containers_oom_count_total",
	I0610 14:19:29.987081  108966 command_runner.go:130] > # ]
	I0610 14:19:29.987092  108966 command_runner.go:130] > # The port on which the metrics server will listen.
	I0610 14:19:29.987102  108966 command_runner.go:130] > # metrics_port = 9090
	I0610 14:19:29.987113  108966 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0610 14:19:29.987124  108966 command_runner.go:130] > # metrics_socket = ""
	I0610 14:19:29.987135  108966 command_runner.go:130] > # The certificate for the secure metrics server.
	I0610 14:19:29.987145  108966 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0610 14:19:29.987156  108966 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0610 14:19:29.987167  108966 command_runner.go:130] > # certificate on any modification event.
	I0610 14:19:29.987177  108966 command_runner.go:130] > # metrics_cert = ""
	I0610 14:19:29.987186  108966 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0610 14:19:29.987197  108966 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0610 14:19:29.987206  108966 command_runner.go:130] > # metrics_key = ""
	I0610 14:19:29.987222  108966 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0610 14:19:29.987232  108966 command_runner.go:130] > [crio.tracing]
	I0610 14:19:29.987243  108966 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0610 14:19:29.987250  108966 command_runner.go:130] > # enable_tracing = false
	I0610 14:19:29.987257  108966 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0610 14:19:29.987268  108966 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0610 14:19:29.987279  108966 command_runner.go:130] > # Number of samples to collect per million spans.
	I0610 14:19:29.987291  108966 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0610 14:19:29.987304  108966 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0610 14:19:29.987317  108966 command_runner.go:130] > [crio.stats]
	I0610 14:19:29.987329  108966 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0610 14:19:29.987341  108966 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0610 14:19:29.987348  108966 command_runner.go:130] > # stats_collection_period = 0
	I0610 14:19:29.987379  108966 command_runner.go:130] ! time="2023-06-10 14:19:29.980200286Z" level=info msg="Starting CRI-O, version: 1.24.5, git: b007cb6753d97de6218787b6894b0e3cc1dc8ecd(clean)"
	I0610 14:19:29.987400  108966 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0610 14:19:29.987491  108966 cni.go:84] Creating CNI manager for ""
	I0610 14:19:29.987503  108966 cni.go:136] 1 nodes found, recommending kindnet
	I0610 14:19:29.987514  108966 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 14:19:29.987539  108966 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-007346 NodeName:multinode-007346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 14:19:29.987665  108966 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-007346"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 14:19:29.987724  108966 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-007346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-007346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 14:19:29.987771  108966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 14:19:29.994625  108966 command_runner.go:130] > kubeadm
	I0610 14:19:29.994642  108966 command_runner.go:130] > kubectl
	I0610 14:19:29.994648  108966 command_runner.go:130] > kubelet
	I0610 14:19:29.995187  108966 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 14:19:29.995249  108966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 14:19:30.002271  108966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0610 14:19:30.016594  108966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 14:19:30.030928  108966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0610 14:19:30.045298  108966 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0610 14:19:30.048199  108966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 14:19:30.057040  108966 certs.go:56] Setting up /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346 for IP: 192.168.58.2
	I0610 14:19:30.057068  108966 certs.go:190] acquiring lock for shared ca certs: {Name:mk47e57fed67616a983122d88149f57794c568cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:30.057203  108966 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key
	I0610 14:19:30.057253  108966 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key
	I0610 14:19:30.057305  108966 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.key
	I0610 14:19:30.057323  108966 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.crt with IP's: []
	I0610 14:19:30.274393  108966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.crt ...
	I0610 14:19:30.274421  108966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.crt: {Name:mk0d5eed44785a5e0ff4568d24b99a8053c19f7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:30.274578  108966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.key ...
	I0610 14:19:30.274587  108966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.key: {Name:mk1d0bd30dbc33ac82ac897b897b0ac1e69032f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:30.274656  108966 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.key.cee25041
	I0610 14:19:30.274669  108966 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 14:19:30.378526  108966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.crt.cee25041 ...
	I0610 14:19:30.378551  108966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.crt.cee25041: {Name:mkecddead5629983cc8309a523663a22fde65340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:30.378692  108966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.key.cee25041 ...
	I0610 14:19:30.378708  108966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.key.cee25041: {Name:mk65ce8210f96b2ddad303f3da3b180f87e6c48d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:30.378774  108966 certs.go:337] copying /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.crt
	I0610 14:19:30.378839  108966 certs.go:341] copying /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.key
	I0610 14:19:30.378884  108966 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.key
	I0610 14:19:30.378896  108966 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.crt with IP's: []
	I0610 14:19:30.540327  108966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.crt ...
	I0610 14:19:30.540353  108966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.crt: {Name:mk8def9daba88720ac8041a58bae62f6177d23bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:30.540498  108966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.key ...
	I0610 14:19:30.540507  108966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.key: {Name:mk2504f5cfbb7311fd843116544dcd5741a3bc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:30.540573  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 14:19:30.540590  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 14:19:30.540600  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 14:19:30.540612  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 14:19:30.540623  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 14:19:30.540636  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 14:19:30.540646  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 14:19:30.540659  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 14:19:30.540712  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485.pem (1338 bytes)
	W0610 14:19:30.540748  108966 certs.go:433] ignoring /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485_empty.pem, impossibly tiny 0 bytes
	I0610 14:19:30.540759  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 14:19:30.540782  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem (1078 bytes)
	I0610 14:19:30.540803  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem (1123 bytes)
	I0610 14:19:30.540833  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem (1675 bytes)
	I0610 14:19:30.540871  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem (1708 bytes)
	I0610 14:19:30.540895  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> /usr/share/ca-certificates/254852.pem
	I0610 14:19:30.540908  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:19:30.540920  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485.pem -> /usr/share/ca-certificates/25485.pem
	I0610 14:19:30.541387  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 14:19:30.561730  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 14:19:30.580853  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 14:19:30.601742  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 14:19:30.621259  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 14:19:30.640573  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 14:19:30.660131  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 14:19:30.679274  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 14:19:30.698679  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem --> /usr/share/ca-certificates/254852.pem (1708 bytes)
	I0610 14:19:30.718857  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 14:19:30.738799  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485.pem --> /usr/share/ca-certificates/25485.pem (1338 bytes)
	I0610 14:19:30.758612  108966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 14:19:30.773133  108966 ssh_runner.go:195] Run: openssl version
	I0610 14:19:30.777496  108966 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0610 14:19:30.777676  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 14:19:30.785288  108966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:19:30.788300  108966 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 14:02 /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:19:30.788331  108966 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 14:02 /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:19:30.788392  108966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:19:30.794050  108966 command_runner.go:130] > b5213941
	I0610 14:19:30.794190  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 14:19:30.801846  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25485.pem && ln -fs /usr/share/ca-certificates/25485.pem /etc/ssl/certs/25485.pem"
	I0610 14:19:30.809488  108966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25485.pem
	I0610 14:19:30.812389  108966 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 14:07 /usr/share/ca-certificates/25485.pem
	I0610 14:19:30.812415  108966 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 14:07 /usr/share/ca-certificates/25485.pem
	I0610 14:19:30.812452  108966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25485.pem
	I0610 14:19:30.818186  108966 command_runner.go:130] > 51391683
	I0610 14:19:30.818265  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25485.pem /etc/ssl/certs/51391683.0"
	I0610 14:19:30.825797  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254852.pem && ln -fs /usr/share/ca-certificates/254852.pem /etc/ssl/certs/254852.pem"
	I0610 14:19:30.833363  108966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254852.pem
	I0610 14:19:30.836328  108966 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 14:07 /usr/share/ca-certificates/254852.pem
	I0610 14:19:30.836370  108966 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 14:07 /usr/share/ca-certificates/254852.pem
	I0610 14:19:30.836401  108966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254852.pem
	I0610 14:19:30.842233  108966 command_runner.go:130] > 3ec20f2e
	I0610 14:19:30.842289  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254852.pem /etc/ssl/certs/3ec20f2e.0"
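The three `openssl x509 -hash` / `ln -fs` pairs above (producing `b5213941.0`, `51391683.0`, and `3ec20f2e.0`) install each CA under OpenSSL's subject-hash naming scheme, which is how OpenSSL locates trusted certificates in `/etc/ssl/certs`. A minimal sketch of that convention, using a throwaway self-signed certificate in a temp directory rather than minikube's real certs and paths:

```shell
set -eu
dir=$(mktemp -d)

# Generate a throwaway self-signed CA certificate (illustrative only;
# minikube uses its own pre-generated CA files).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null

# OpenSSL looks trusted CAs up by an 8-hex-digit hash of the subject
# name, so the symlink must be named <hash>.0 -- the same scheme the
# log applies with "ln -fs ... /etc/ssl/certs/<hash>.0".
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"

echo "$hash"
```

The `test -L … || ln -fs …` guard seen in the log just makes the operation idempotent across restarts.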
	I0610 14:19:30.849744  108966 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 14:19:30.852401  108966 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 14:19:30.852442  108966 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 14:19:30.852484  108966 kubeadm.go:404] StartCluster: {Name:multinode-007346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-007346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:19:30.852572  108966 cri.go:53] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 14:19:30.852626  108966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 14:19:30.883582  108966 cri.go:88] found id: ""
	I0610 14:19:30.883649  108966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 14:19:30.890577  108966 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0610 14:19:30.890603  108966 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0610 14:19:30.890614  108966 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0610 14:19:30.891261  108966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 14:19:30.898662  108966 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0610 14:19:30.898710  108966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 14:19:30.905678  108966 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0610 14:19:30.905703  108966 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0610 14:19:30.905712  108966 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0610 14:19:30.905721  108966 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 14:19:30.905747  108966 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 14:19:30.905775  108966 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0610 14:19:30.946950  108966 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 14:19:30.946979  108966 command_runner.go:130] > [init] Using Kubernetes version: v1.27.2
	I0610 14:19:30.947027  108966 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 14:19:30.947045  108966 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 14:19:30.979723  108966 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0610 14:19:30.979755  108966 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0610 14:19:30.979827  108966 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1035-gcp
	I0610 14:19:30.979838  108966 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1035-gcp
	I0610 14:19:30.979895  108966 kubeadm.go:322] OS: Linux
	I0610 14:19:30.979905  108966 command_runner.go:130] > OS: Linux
	I0610 14:19:30.979966  108966 kubeadm.go:322] CGROUPS_CPU: enabled
	I0610 14:19:30.979978  108966 command_runner.go:130] > CGROUPS_CPU: enabled
	I0610 14:19:30.980044  108966 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0610 14:19:30.980054  108966 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0610 14:19:30.980115  108966 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0610 14:19:30.980124  108966 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0610 14:19:30.980186  108966 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0610 14:19:30.980195  108966 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0610 14:19:30.980259  108966 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0610 14:19:30.980280  108966 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0610 14:19:30.980374  108966 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0610 14:19:30.980383  108966 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0610 14:19:30.980442  108966 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0610 14:19:30.980456  108966 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0610 14:19:30.980518  108966 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0610 14:19:30.980532  108966 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0610 14:19:30.980592  108966 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0610 14:19:30.980620  108966 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0610 14:19:31.040809  108966 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 14:19:31.040847  108966 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 14:19:31.040953  108966 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 14:19:31.040978  108966 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 14:19:31.041093  108966 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 14:19:31.041106  108966 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 14:19:31.223741  108966 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 14:19:31.225740  108966 out.go:204]   - Generating certificates and keys ...
	I0610 14:19:31.223830  108966 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 14:19:31.225867  108966 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 14:19:31.225878  108966 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0610 14:19:31.225927  108966 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 14:19:31.225941  108966 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0610 14:19:31.412856  108966 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 14:19:31.412880  108966 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 14:19:31.604852  108966 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 14:19:31.604900  108966 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0610 14:19:31.689499  108966 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 14:19:31.689543  108966 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0610 14:19:31.782089  108966 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 14:19:31.782116  108966 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0610 14:19:31.918309  108966 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 14:19:31.918337  108966 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0610 14:19:31.918503  108966 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-007346] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0610 14:19:31.918517  108966 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-007346] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0610 14:19:32.128666  108966 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 14:19:32.128688  108966 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0610 14:19:32.128781  108966 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-007346] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0610 14:19:32.128803  108966 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-007346] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0610 14:19:32.231647  108966 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 14:19:32.231667  108966 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 14:19:32.495717  108966 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 14:19:32.495747  108966 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 14:19:32.620339  108966 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 14:19:32.620370  108966 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0610 14:19:32.620421  108966 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 14:19:32.620426  108966 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 14:19:32.771500  108966 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 14:19:32.771527  108966 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 14:19:32.857055  108966 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 14:19:32.857102  108966 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 14:19:33.145975  108966 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 14:19:33.146017  108966 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 14:19:33.390947  108966 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 14:19:33.390973  108966 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 14:19:33.398488  108966 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 14:19:33.398512  108966 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 14:19:33.399345  108966 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 14:19:33.399357  108966 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 14:19:33.399403  108966 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 14:19:33.399412  108966 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 14:19:33.470330  108966 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 14:19:33.473023  108966 out.go:204]   - Booting up control plane ...
	I0610 14:19:33.470422  108966 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 14:19:33.473121  108966 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 14:19:33.473138  108966 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 14:19:33.474554  108966 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 14:19:33.474570  108966 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 14:19:33.475764  108966 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 14:19:33.475786  108966 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 14:19:33.476620  108966 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 14:19:33.476638  108966 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 14:19:33.478558  108966 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 14:19:33.478590  108966 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 14:19:38.480510  108966 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001880 seconds
	I0610 14:19:38.480540  108966 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.001880 seconds
	I0610 14:19:38.480664  108966 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 14:19:38.480675  108966 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 14:19:38.493300  108966 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 14:19:38.493331  108966 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 14:19:39.012795  108966 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 14:19:39.012838  108966 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0610 14:19:39.013083  108966 kubeadm.go:322] [mark-control-plane] Marking the node multinode-007346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 14:19:39.013109  108966 command_runner.go:130] > [mark-control-plane] Marking the node multinode-007346 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 14:19:39.523494  108966 kubeadm.go:322] [bootstrap-token] Using token: 1k0z92.6fvditkg7y2jcpbv
	I0610 14:19:39.525512  108966 out.go:204]   - Configuring RBAC rules ...
	I0610 14:19:39.523572  108966 command_runner.go:130] > [bootstrap-token] Using token: 1k0z92.6fvditkg7y2jcpbv
	I0610 14:19:39.525665  108966 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 14:19:39.525685  108966 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 14:19:39.528814  108966 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 14:19:39.528834  108966 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 14:19:39.534376  108966 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 14:19:39.534391  108966 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 14:19:39.536792  108966 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 14:19:39.536816  108966 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 14:19:39.539343  108966 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 14:19:39.539363  108966 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 14:19:39.542856  108966 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 14:19:39.542870  108966 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 14:19:39.551057  108966 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 14:19:39.551084  108966 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 14:19:39.745823  108966 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 14:19:39.745858  108966 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0610 14:19:39.964384  108966 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 14:19:39.964407  108966 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0610 14:19:39.965560  108966 kubeadm.go:322] 
	I0610 14:19:39.965649  108966 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 14:19:39.965661  108966 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0610 14:19:39.965664  108966 kubeadm.go:322] 
	I0610 14:19:39.965749  108966 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 14:19:39.965759  108966 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0610 14:19:39.965764  108966 kubeadm.go:322] 
	I0610 14:19:39.965795  108966 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 14:19:39.965805  108966 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0610 14:19:39.965876  108966 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 14:19:39.965887  108966 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 14:19:39.965951  108966 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 14:19:39.965960  108966 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 14:19:39.965965  108966 kubeadm.go:322] 
	I0610 14:19:39.966023  108966 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 14:19:39.966041  108966 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0610 14:19:39.966070  108966 kubeadm.go:322] 
	I0610 14:19:39.966153  108966 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 14:19:39.966168  108966 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 14:19:39.966186  108966 kubeadm.go:322] 
	I0610 14:19:39.966263  108966 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 14:19:39.966282  108966 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0610 14:19:39.966371  108966 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 14:19:39.966382  108966 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 14:19:39.966463  108966 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 14:19:39.966472  108966 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 14:19:39.966477  108966 kubeadm.go:322] 
	I0610 14:19:39.966573  108966 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 14:19:39.966582  108966 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0610 14:19:39.966670  108966 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 14:19:39.966680  108966 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0610 14:19:39.966685  108966 kubeadm.go:322] 
	I0610 14:19:39.966784  108966 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1k0z92.6fvditkg7y2jcpbv \
	I0610 14:19:39.966793  108966 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 1k0z92.6fvditkg7y2jcpbv \
	I0610 14:19:39.966905  108966 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 \
	I0610 14:19:39.966918  108966 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 \
	I0610 14:19:39.966944  108966 kubeadm.go:322] 	--control-plane 
	I0610 14:19:39.966954  108966 command_runner.go:130] > 	--control-plane 
	I0610 14:19:39.966960  108966 kubeadm.go:322] 
	I0610 14:19:39.967053  108966 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 14:19:39.967062  108966 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0610 14:19:39.967066  108966 kubeadm.go:322] 
	I0610 14:19:39.967157  108966 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1k0z92.6fvditkg7y2jcpbv \
	I0610 14:19:39.967166  108966 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1k0z92.6fvditkg7y2jcpbv \
	I0610 14:19:39.967274  108966 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 
	I0610 14:19:39.967284  108966 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 
	I0610 14:19:39.969243  108966 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0610 14:19:39.969263  108966 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0610 14:19:39.969398  108966 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 14:19:39.969418  108966 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 14:19:39.969611  108966 kubeadm.go:322] W0610 14:19:31.040689    1192 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 14:19:39.969620  108966 command_runner.go:130] ! W0610 14:19:31.040689    1192 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 14:19:39.969826  108966 kubeadm.go:322] W0610 14:19:33.476420    1192 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 14:19:39.969843  108966 command_runner.go:130] ! W0610 14:19:33.476420    1192 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 14:19:39.969866  108966 cni.go:84] Creating CNI manager for ""
	I0610 14:19:39.969883  108966 cni.go:136] 1 nodes found, recommending kindnet
	I0610 14:19:39.971661  108966 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 14:19:39.973053  108966 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 14:19:39.976414  108966 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 14:19:39.976436  108966 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0610 14:19:39.976446  108966 command_runner.go:130] > Device: 37h/55d	Inode: 802287      Links: 1
	I0610 14:19:39.976455  108966 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 14:19:39.976480  108966 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0610 14:19:39.976491  108966 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0610 14:19:39.976495  108966 command_runner.go:130] > Change: 2023-06-10 14:01:36.496408099 +0000
	I0610 14:19:39.976500  108966 command_runner.go:130] >  Birth: 2023-06-10 14:01:36.472405538 +0000
	I0610 14:19:39.976543  108966 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0610 14:19:39.976552  108966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 14:19:39.991909  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 14:19:40.581960  108966 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0610 14:19:40.586702  108966 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0610 14:19:40.594873  108966 command_runner.go:130] > serviceaccount/kindnet created
	I0610 14:19:40.603892  108966 command_runner.go:130] > daemonset.apps/kindnet created
	I0610 14:19:40.607634  108966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 14:19:40.607705  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:40.607742  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f minikube.k8s.io/name=multinode-007346 minikube.k8s.io/updated_at=2023_06_10T14_19_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:40.614817  108966 command_runner.go:130] > -16
	I0610 14:19:40.616275  108966 ops.go:34] apiserver oom_adj: -16
	I0610 14:19:40.684995  108966 command_runner.go:130] > node/multinode-007346 labeled
	I0610 14:19:40.685051  108966 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0610 14:19:40.685146  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:40.746591  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:41.247539  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:41.304096  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:41.747396  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:41.806663  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:42.247639  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:42.307797  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:42.747381  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:42.807631  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:43.247320  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:43.311445  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:43.747046  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:43.804855  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:44.247160  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:44.305259  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:44.747148  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:44.806753  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:45.247482  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:45.305827  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:45.747037  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:45.809386  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:46.247234  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:46.304364  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:46.747345  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:46.805045  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:47.247385  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:47.307569  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:47.747114  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:47.807807  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:48.247768  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:48.307966  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:48.747429  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:48.806772  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:49.247401  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:49.307932  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:49.747371  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:49.807791  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:50.247413  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:50.307269  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:50.747087  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:50.808386  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:51.247615  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:51.310020  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:51.747446  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:51.807085  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:52.247376  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:52.305779  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:52.746878  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:52.807208  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:53.247106  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:53.308225  108966 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0610 14:19:53.747503  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 14:19:53.817850  108966 command_runner.go:130] > NAME      SECRETS   AGE
	I0610 14:19:53.817878  108966 command_runner.go:130] > default   0         0s
	I0610 14:19:53.817914  108966 kubeadm.go:1076] duration metric: took 13.210269645s to wait for elevateKubeSystemPrivileges.
	I0610 14:19:53.817944  108966 kubeadm.go:406] StartCluster complete in 22.965451848s
	I0610 14:19:53.817970  108966 settings.go:142] acquiring lock: {Name:mk5881f609c073bbe2e65c237b3cf267f8761582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:53.818047  108966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:19:53.818908  108966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15074-18675/kubeconfig: {Name:mk5649556a15e88039256d0bd607afdddb4a6ce9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:19:53.819143  108966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 14:19:53.819312  108966 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 14:19:53.819391  108966 config.go:182] Loaded profile config "multinode-007346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:19:53.819425  108966 addons.go:66] Setting storage-provisioner=true in profile "multinode-007346"
	I0610 14:19:53.819451  108966 addons.go:228] Setting addon storage-provisioner=true in "multinode-007346"
	I0610 14:19:53.819476  108966 addons.go:66] Setting default-storageclass=true in profile "multinode-007346"
	I0610 14:19:53.819508  108966 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:19:53.819520  108966 host.go:66] Checking if "multinode-007346" exists ...
	I0610 14:19:53.819509  108966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-007346"
	I0610 14:19:53.819811  108966 kapi.go:59] client config for multinode-007346: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.crt", KeyFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.key", CAFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bb8e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 14:19:53.819942  108966 cli_runner.go:164] Run: docker container inspect multinode-007346 --format={{.State.Status}}
	I0610 14:19:53.819983  108966 cli_runner.go:164] Run: docker container inspect multinode-007346 --format={{.State.Status}}
	I0610 14:19:53.820724  108966 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 14:19:53.820928  108966 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 14:19:53.820943  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:53.820952  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:53.820961  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:53.831251  108966 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 14:19:53.831279  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:53.831290  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:53.831299  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:53.831310  108966 round_trippers.go:580]     Content-Length: 291
	I0610 14:19:53.831325  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:53 GMT
	I0610 14:19:53.831335  108966 round_trippers.go:580]     Audit-Id: 68e7043c-b727-4737-bb5c-9bf9cb148e08
	I0610 14:19:53.831348  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:53.831357  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:53.831392  108966 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a850b2a-5b13-4da4-8ed3-89b9bb9201e5","resourceVersion":"381","creationTimestamp":"2023-06-10T14:19:39Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 14:19:53.831863  108966 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a850b2a-5b13-4da4-8ed3-89b9bb9201e5","resourceVersion":"381","creationTimestamp":"2023-06-10T14:19:39Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 14:19:53.831956  108966 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 14:19:53.831970  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:53.831981  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:53.831991  108966 round_trippers.go:473]     Content-Type: application/json
	I0610 14:19:53.832004  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:53.838077  108966 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:19:53.840444  108966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 14:19:53.838323  108966 kapi.go:59] client config for multinode-007346: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.crt", KeyFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.key", CAFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bb8e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 14:19:53.842408  108966 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 14:19:53.842428  108966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 14:19:53.842476  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:53.842545  108966 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 14:19:53.842560  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:53.842570  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:53.842588  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:53.856932  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa Username:docker}
	I0610 14:19:53.864319  108966 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0610 14:19:53.864348  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:53.864359  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:53.864370  108966 round_trippers.go:580]     Content-Length: 109
	I0610 14:19:53.864379  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:53 GMT
	I0610 14:19:53.864390  108966 round_trippers.go:580]     Audit-Id: 20d9f69d-1be0-405f-a281-5e5f36986e7c
	I0610 14:19:53.864401  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:53.864412  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:53.864437  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:53.864463  108966 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"381"},"items":[]}
	I0610 14:19:53.864728  108966 addons.go:228] Setting addon default-storageclass=true in "multinode-007346"
	I0610 14:19:53.864777  108966 host.go:66] Checking if "multinode-007346" exists ...
	I0610 14:19:53.865268  108966 cli_runner.go:164] Run: docker container inspect multinode-007346 --format={{.State.Status}}
	I0610 14:19:53.865472  108966 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0610 14:19:53.865500  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:53.865510  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:53.865518  108966 round_trippers.go:580]     Content-Length: 291
	I0610 14:19:53.865527  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:53 GMT
	I0610 14:19:53.865535  108966 round_trippers.go:580]     Audit-Id: bf8a25c0-0a15-4de9-babb-9ba6f24b826f
	I0610 14:19:53.865543  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:53.865551  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:53.865558  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:53.865581  108966 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a850b2a-5b13-4da4-8ed3-89b9bb9201e5","resourceVersion":"382","creationTimestamp":"2023-06-10T14:19:39Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0610 14:19:53.885330  108966 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 14:19:53.885352  108966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 14:19:53.885402  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:19:53.908161  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa Username:docker}
	I0610 14:19:53.972557  108966 command_runner.go:130] > apiVersion: v1
	I0610 14:19:53.972578  108966 command_runner.go:130] > data:
	I0610 14:19:53.972584  108966 command_runner.go:130] >   Corefile: |
	I0610 14:19:53.972591  108966 command_runner.go:130] >     .:53 {
	I0610 14:19:53.972597  108966 command_runner.go:130] >         errors
	I0610 14:19:53.972605  108966 command_runner.go:130] >         health {
	I0610 14:19:53.972612  108966 command_runner.go:130] >            lameduck 5s
	I0610 14:19:53.972618  108966 command_runner.go:130] >         }
	I0610 14:19:53.972624  108966 command_runner.go:130] >         ready
	I0610 14:19:53.972634  108966 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0610 14:19:53.972641  108966 command_runner.go:130] >            pods insecure
	I0610 14:19:53.972649  108966 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0610 14:19:53.972658  108966 command_runner.go:130] >            ttl 30
	I0610 14:19:53.972664  108966 command_runner.go:130] >         }
	I0610 14:19:53.972677  108966 command_runner.go:130] >         prometheus :9153
	I0610 14:19:53.972685  108966 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0610 14:19:53.972692  108966 command_runner.go:130] >            max_concurrent 1000
	I0610 14:19:53.972701  108966 command_runner.go:130] >         }
	I0610 14:19:53.972709  108966 command_runner.go:130] >         cache 30
	I0610 14:19:53.972718  108966 command_runner.go:130] >         loop
	I0610 14:19:53.972724  108966 command_runner.go:130] >         reload
	I0610 14:19:53.972734  108966 command_runner.go:130] >         loadbalance
	I0610 14:19:53.972740  108966 command_runner.go:130] >     }
	I0610 14:19:53.972747  108966 command_runner.go:130] > kind: ConfigMap
	I0610 14:19:53.972758  108966 command_runner.go:130] > metadata:
	I0610 14:19:53.972771  108966 command_runner.go:130] >   creationTimestamp: "2023-06-10T14:19:39Z"
	I0610 14:19:53.972780  108966 command_runner.go:130] >   name: coredns
	I0610 14:19:53.972787  108966 command_runner.go:130] >   namespace: kube-system
	I0610 14:19:53.972797  108966 command_runner.go:130] >   resourceVersion: "255"
	I0610 14:19:53.972810  108966 command_runner.go:130] >   uid: ab80d207-a682-4d0d-896f-35975214c3ee
	I0610 14:19:53.975236  108966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 14:19:53.978077  108966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 14:19:54.082118  108966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 14:19:54.366584  108966 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 14:19:54.366607  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:54.366619  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:54.366630  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:54.369446  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:54.369468  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:54.369478  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:54.369488  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:54.369497  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:54.369506  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:54.369515  108966 round_trippers.go:580]     Content-Length: 291
	I0610 14:19:54.369525  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:54 GMT
	I0610 14:19:54.369534  108966 round_trippers.go:580]     Audit-Id: 575f157a-2e7d-4f1d-8f22-17331ce3d209
	I0610 14:19:54.369556  108966 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a850b2a-5b13-4da4-8ed3-89b9bb9201e5","resourceVersion":"392","creationTimestamp":"2023-06-10T14:19:39Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0610 14:19:54.369653  108966 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-007346" context rescaled to 1 replicas
	I0610 14:19:54.369688  108966 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 14:19:54.373674  108966 out.go:177] * Verifying Kubernetes components...
	I0610 14:19:54.375501  108966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:19:54.769082  108966 command_runner.go:130] > configmap/coredns replaced
	I0610 14:19:54.773257  108966 start.go:916] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0610 14:19:54.986262  108966 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0610 14:19:54.986293  108966 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0610 14:19:54.986312  108966 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 14:19:54.986324  108966 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0610 14:19:54.986335  108966 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0610 14:19:54.986343  108966 command_runner.go:130] > pod/storage-provisioner created
	I0610 14:19:54.986369  108966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.008268419s)
	I0610 14:19:54.986410  108966 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0610 14:19:54.988249  108966 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 14:19:54.986880  108966 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:19:54.989847  108966 addons.go:499] enable addons completed in 1.170534662s: enabled=[storage-provisioner default-storageclass]
	I0610 14:19:54.990175  108966 kapi.go:59] client config for multinode-007346: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.crt", KeyFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.key", CAFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bb8e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 14:19:54.990554  108966 node_ready.go:35] waiting up to 6m0s for node "multinode-007346" to be "Ready" ...
	I0610 14:19:54.990632  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:54.990643  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:54.990653  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:54.990661  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:54.993321  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:54.993340  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:54.993350  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:54.993359  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:54.993371  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:54.993383  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:54.993396  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:54 GMT
	I0610 14:19:54.993408  108966 round_trippers.go:580]     Audit-Id: 166a36c9-d429-44cd-a562-2687174ceb23
	I0610 14:19:54.993508  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:55.494751  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:55.494772  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:55.494780  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:55.494786  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:55.497012  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:55.497035  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:55.497045  108966 round_trippers.go:580]     Audit-Id: 1617ac7d-c5da-457e-a0e0-6a49ae116679
	I0610 14:19:55.497052  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:55.497061  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:55.497070  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:55.497080  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:55.497093  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:55 GMT
	I0610 14:19:55.497206  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:55.994796  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:55.994819  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:55.994828  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:55.994835  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:55.996986  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:55.997008  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:55.997018  108966 round_trippers.go:580]     Audit-Id: 1c224915-66c6-4362-bdbf-787a1042fb96
	I0610 14:19:55.997027  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:55.997043  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:55.997052  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:55.997064  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:55.997073  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:55 GMT
	I0610 14:19:55.997171  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:56.494911  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:56.494931  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:56.494941  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:56.494949  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:56.496993  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:56.497015  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:56.497023  108966 round_trippers.go:580]     Audit-Id: cc51c06a-a156-4f27-a631-07b78f3ff11a
	I0610 14:19:56.497029  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:56.497034  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:56.497042  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:56.497048  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:56.497053  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:56 GMT
	I0610 14:19:56.497156  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:56.994330  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:56.994358  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:56.994366  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:56.994373  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:56.996863  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:56.996884  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:56.996894  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:56.996905  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:56.996914  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:56 GMT
	I0610 14:19:56.996925  108966 round_trippers.go:580]     Audit-Id: 58990f00-85dc-4b06-a3e7-9112252fdb28
	I0610 14:19:56.996938  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:56.996950  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:56.997062  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:56.997411  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:19:57.494583  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:57.494604  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:57.494614  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:57.494622  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:57.496812  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:57.496832  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:57.496841  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:57.496849  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:57 GMT
	I0610 14:19:57.496857  108966 round_trippers.go:580]     Audit-Id: 0654197a-4801-460e-ae17-3a2fcc1e87f1
	I0610 14:19:57.496866  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:57.496875  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:57.496883  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:57.497020  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:57.994362  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:57.994384  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:57.994397  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:57.994407  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:57.998331  108966 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 14:19:57.998355  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:57.998366  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:57.998375  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:57.998387  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:57.998393  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:57 GMT
	I0610 14:19:57.998399  108966 round_trippers.go:580]     Audit-Id: 6f09fa64-7f76-42cb-a883-614f5c0e73fe
	I0610 14:19:57.998407  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:57.998474  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:58.495107  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:58.495124  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:58.495132  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:58.495138  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:58.497132  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:19:58.497156  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:58.497165  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:58.497175  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:58 GMT
	I0610 14:19:58.497184  108966 round_trippers.go:580]     Audit-Id: c2eb7ffe-f6e5-4333-ad06-54934eecfb76
	I0610 14:19:58.497193  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:58.497202  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:58.497213  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:58.497354  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:58.994909  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:58.994927  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:58.994935  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:58.994941  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:58.997031  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:58.997052  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:58.997061  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:58 GMT
	I0610 14:19:58.997070  108966 round_trippers.go:580]     Audit-Id: f95f896f-f129-4e08-9d05-3842eb639dc5
	I0610 14:19:58.997081  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:58.997091  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:58.997100  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:58.997113  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:58.997217  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:58.997535  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:19:59.494847  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:59.494873  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:59.494885  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:59.494895  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:59.497067  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:59.497087  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:59.497097  108966 round_trippers.go:580]     Audit-Id: d974b3f2-9cd1-41e0-b51e-99f6a5184d70
	I0610 14:19:59.497105  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:59.497115  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:59.497128  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:59.497141  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:59.497151  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:59 GMT
	I0610 14:19:59.497255  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:19:59.994818  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:19:59.994834  108966 round_trippers.go:469] Request Headers:
	I0610 14:19:59.994842  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:19:59.994848  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:19:59.996992  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:19:59.997012  108966 round_trippers.go:577] Response Headers:
	I0610 14:19:59.997022  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:19:59.997030  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:19:59.997040  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:19:59.997053  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:19:59 GMT
	I0610 14:19:59.997065  108966 round_trippers.go:580]     Audit-Id: b924b965-f473-4d43-8ecf-91be3eb86c91
	I0610 14:19:59.997075  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:19:59.997168  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:00.494768  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:00.494794  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:00.494802  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:00.494809  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:00.497100  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:00.497119  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:00.497125  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:00.497131  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:00.497137  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:00 GMT
	I0610 14:20:00.497143  108966 round_trippers.go:580]     Audit-Id: a570b5c1-e631-4ca0-9266-f7d0aa5cb718
	I0610 14:20:00.497152  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:00.497160  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:00.497279  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:00.994877  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:00.994895  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:00.994904  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:00.994911  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:00.997384  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:00.997405  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:00.997414  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:00.997423  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:00.997431  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:00 GMT
	I0610 14:20:00.997439  108966 round_trippers.go:580]     Audit-Id: 5c520d23-d916-4876-be7c-c69b66dc03d2
	I0610 14:20:00.997447  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:00.997455  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:00.997571  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:00.997891  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:01.494342  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:01.494362  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:01.494369  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:01.494376  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:01.496651  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:01.496672  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:01.496682  108966 round_trippers.go:580]     Audit-Id: cc8c9f83-db73-4957-a03b-655a1f88f0d9
	I0610 14:20:01.496691  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:01.496704  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:01.496715  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:01.496724  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:01.496737  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:01 GMT
	I0610 14:20:01.496904  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:01.994339  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:01.994358  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:01.994366  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:01.994377  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:01.996539  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:01.996562  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:01.996572  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:01.996580  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:01.996589  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:01.996601  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:01 GMT
	I0610 14:20:01.996610  108966 round_trippers.go:580]     Audit-Id: e67a3941-ebc2-455a-a093-6249fbbfe848
	I0610 14:20:01.996623  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:01.996711  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:02.494353  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:02.494386  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:02.494395  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:02.494402  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:02.496305  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:02.496322  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:02.496328  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:02.496334  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:02.496339  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:02.496344  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:02 GMT
	I0610 14:20:02.496350  108966 round_trippers.go:580]     Audit-Id: a05bc80f-679f-45f2-923f-14cc4e0bd93d
	I0610 14:20:02.496356  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:02.496487  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:02.995145  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:02.995163  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:02.995172  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:02.995182  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:02.997228  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:02.997249  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:02.997256  108966 round_trippers.go:580]     Audit-Id: 1a31bed6-189e-40a4-b7af-ada052bf22a7
	I0610 14:20:02.997261  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:02.997267  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:02.997272  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:02.997280  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:02.997292  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:02 GMT
	I0610 14:20:02.997391  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:03.494987  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:03.495004  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:03.495012  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:03.495018  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:03.497264  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:03.497280  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:03.497286  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:03.497292  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:03.497297  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:03.497311  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:03 GMT
	I0610 14:20:03.497319  108966 round_trippers.go:580]     Audit-Id: aacfbe67-f07f-4979-9be0-9b8876c261b9
	I0610 14:20:03.497331  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:03.497442  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:03.497769  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:03.995148  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:03.995165  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:03.995173  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:03.995179  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:03.997552  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:03.997571  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:03.997581  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:03.997589  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:03.997597  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:03.997606  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:03 GMT
	I0610 14:20:03.997614  108966 round_trippers.go:580]     Audit-Id: bd4ccba5-25ae-494d-9141-4e86d4bd0ed7
	I0610 14:20:03.997625  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:03.997787  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:04.494316  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:04.494333  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:04.494341  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:04.494347  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:04.496568  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:04.496584  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:04.496591  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:04 GMT
	I0610 14:20:04.496596  108966 round_trippers.go:580]     Audit-Id: aaa16d79-2eec-47db-83cd-5b754b7a262f
	I0610 14:20:04.496601  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:04.496608  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:04.496617  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:04.496629  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:04.496747  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:04.994314  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:04.994334  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:04.994345  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:04.994351  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:04.996481  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:04.996502  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:04.996511  108966 round_trippers.go:580]     Audit-Id: ff2a09eb-fd1a-4e1c-9112-c07ad9b14ebe
	I0610 14:20:04.996519  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:04.996528  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:04.996540  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:04.996549  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:04.996560  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:04 GMT
	I0610 14:20:04.996647  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:05.494321  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:05.494356  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:05.494376  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:05.494392  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:05.496553  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:05.496569  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:05.496576  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:05.496581  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:05.496587  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:05.496594  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:05 GMT
	I0610 14:20:05.496604  108966 round_trippers.go:580]     Audit-Id: 41631742-2756-4b18-abff-0f63e72dedc8
	I0610 14:20:05.496613  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:05.496719  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:05.994293  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:05.994313  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:05.994321  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:05.994327  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:05.996742  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:05.996766  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:05.996776  108966 round_trippers.go:580]     Audit-Id: 46d3ee11-f8af-413c-b097-62915dd239a4
	I0610 14:20:05.996796  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:05.996805  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:05.996815  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:05.996827  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:05.996840  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:05 GMT
	I0610 14:20:05.996938  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:05.997330  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:06.494619  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:06.494638  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:06.494649  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:06.494658  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:06.496893  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:06.496917  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:06.496927  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:06 GMT
	I0610 14:20:06.496936  108966 round_trippers.go:580]     Audit-Id: 084df520-a694-4077-b8dd-e7458a131337
	I0610 14:20:06.496946  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:06.496954  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:06.496961  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:06.496972  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:06.497104  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:06.994345  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:06.994363  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:06.994371  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:06.994377  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:06.996773  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:06.996795  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:06.996805  108966 round_trippers.go:580]     Audit-Id: b6d23a90-9135-4fee-876c-be6599073cdb
	I0610 14:20:06.996815  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:06.996824  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:06.996834  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:06.996861  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:06.996877  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:06 GMT
	I0610 14:20:06.997022  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:07.494559  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:07.494579  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:07.494587  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:07.494593  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:07.496815  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:07.496840  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:07.496850  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:07 GMT
	I0610 14:20:07.496860  108966 round_trippers.go:580]     Audit-Id: 4a60d9a8-38f6-4057-aaac-be55253fe1c1
	I0610 14:20:07.496868  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:07.496881  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:07.496890  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:07.496899  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:07.497084  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:07.994371  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:07.994390  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:07.994398  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:07.994409  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:07.996640  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:07.996658  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:07.996664  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:07.996670  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:07 GMT
	I0610 14:20:07.996675  108966 round_trippers.go:580]     Audit-Id: 3062685b-0629-436f-8d83-271f49c815c5
	I0610 14:20:07.996680  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:07.996696  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:07.996702  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:07.996809  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:08.494328  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:08.494347  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:08.494356  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:08.494362  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:08.496625  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:08.496643  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:08.496649  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:08.496655  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:08.496660  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:08 GMT
	I0610 14:20:08.496665  108966 round_trippers.go:580]     Audit-Id: d7ece20e-53ac-4089-b6fb-1b95ad8c8d3f
	I0610 14:20:08.496671  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:08.496679  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:08.496847  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:08.497147  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:08.994355  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:08.994374  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:08.994382  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:08.994388  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:08.996532  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:08.996549  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:08.996555  108966 round_trippers.go:580]     Audit-Id: e4e4d7af-ec50-4e5b-bb76-300fcfd37071
	I0610 14:20:08.996561  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:08.996566  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:08.996571  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:08.996576  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:08.996582  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:08 GMT
	I0610 14:20:08.996664  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:09.494303  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:09.494332  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:09.494340  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:09.494349  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:09.496847  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:09.496869  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:09.496879  108966 round_trippers.go:580]     Audit-Id: 706736eb-6237-4662-8545-42acc9f6a44c
	I0610 14:20:09.496887  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:09.496895  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:09.496910  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:09.496922  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:09.496930  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:09 GMT
	I0610 14:20:09.497156  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:09.994758  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:09.994777  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:09.994785  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:09.994798  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:09.997012  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:09.997037  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:09.997047  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:09.997057  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:09.997066  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:09 GMT
	I0610 14:20:09.997074  108966 round_trippers.go:580]     Audit-Id: c57678a5-0ecb-4583-9151-97c203d278ca
	I0610 14:20:09.997087  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:09.997099  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:09.997234  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:10.494815  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:10.494840  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:10.494852  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:10.494862  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:10.497085  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:10.497105  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:10.497112  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:10.497119  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:10.497128  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:10 GMT
	I0610 14:20:10.497138  108966 round_trippers.go:580]     Audit-Id: 4122957b-0469-426e-a54e-47502882599c
	I0610 14:20:10.497151  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:10.497161  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:10.497350  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:10.497667  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:10.994891  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:10.994909  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:10.994917  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:10.994924  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:10.997001  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:10.997017  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:10.997024  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:10.997029  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:10.997034  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:10.997040  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:10.997045  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:10 GMT
	I0610 14:20:10.997052  108966 round_trippers.go:580]     Audit-Id: a00364c4-d046-4d2a-a34c-e36c69745d66
	I0610 14:20:10.997187  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:11.494188  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:11.494225  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:11.494237  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:11.494245  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:11.496542  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:11.496562  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:11.496568  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:11.496575  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:11 GMT
	I0610 14:20:11.496583  108966 round_trippers.go:580]     Audit-Id: 18bd19e9-086e-43fb-825c-0ea91814c9f1
	I0610 14:20:11.496593  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:11.496607  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:11.496616  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:11.496717  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:11.994316  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:11.994335  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:11.994343  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:11.994349  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:11.996521  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:11.996539  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:11.996546  108966 round_trippers.go:580]     Audit-Id: 0eafcfa1-c3c2-4432-a8b3-b0827b8b503b
	I0610 14:20:11.996553  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:11.996563  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:11.996572  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:11.996582  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:11.996595  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:11 GMT
	I0610 14:20:11.996727  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:12.494320  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:12.494338  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:12.494346  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:12.494352  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:12.496492  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:12.496515  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:12.496524  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:12.496530  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:12.496535  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:12.496541  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:12.496547  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:12 GMT
	I0610 14:20:12.496556  108966 round_trippers.go:580]     Audit-Id: 2741717d-07db-4a30-b571-446999983b0b
	I0610 14:20:12.496700  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:12.994232  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:12.994253  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:12.994262  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:12.994267  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:12.996688  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:12.996709  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:12.996717  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:12.996726  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:12.996734  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:12 GMT
	I0610 14:20:12.996743  108966 round_trippers.go:580]     Audit-Id: 54e64f94-d060-45e2-80a2-4b3a6b9b57d8
	I0610 14:20:12.996755  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:12.996765  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:12.996969  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:12.997277  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:13.494326  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:13.494344  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:13.494352  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:13.494358  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:13.496546  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:13.496564  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:13.496571  108966 round_trippers.go:580]     Audit-Id: 50621ba3-872e-4cde-a1d7-99af99f7791f
	I0610 14:20:13.496576  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:13.496581  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:13.496587  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:13.496592  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:13.496597  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:13 GMT
	I0610 14:20:13.496702  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:13.994491  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:13.994509  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:13.994517  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:13.994523  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:13.996632  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:13.996650  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:13.996657  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:13.996663  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:13.996670  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:13 GMT
	I0610 14:20:13.996677  108966 round_trippers.go:580]     Audit-Id: 89ccb733-3dce-46a6-97c4-421561816482
	I0610 14:20:13.996685  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:13.996696  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:13.996861  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:14.494348  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:14.494366  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:14.494374  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:14.494380  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:14.496595  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:14.496611  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:14.496619  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:14.496625  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:14.496630  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:14.496636  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:14 GMT
	I0610 14:20:14.496642  108966 round_trippers.go:580]     Audit-Id: c3732092-c871-4e2f-b385-71777e18eed1
	I0610 14:20:14.496647  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:14.496825  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:14.994319  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:14.994354  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:14.994368  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:14.994377  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:14.996488  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:14.996509  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:14.996519  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:14.996528  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:14 GMT
	I0610 14:20:14.996539  108966 round_trippers.go:580]     Audit-Id: 8e655d48-c7fe-464c-8a3f-d09834302e2e
	I0610 14:20:14.996551  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:14.996564  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:14.996594  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:14.996708  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:15.494235  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:15.494256  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:15.494270  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:15.494279  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:15.496466  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:15.496483  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:15.496490  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:15.496496  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:15.496502  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:15.496509  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:15.496518  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:15 GMT
	I0610 14:20:15.496526  108966 round_trippers.go:580]     Audit-Id: c6f04a5d-d2bc-43e2-b9a1-3e8dbd59fbeb
	I0610 14:20:15.496651  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:15.496984  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:15.994182  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:15.994211  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:15.994219  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:15.994225  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:15.996449  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:15.996468  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:15.996478  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:15.996486  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:15.996494  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:15.996502  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:15 GMT
	I0610 14:20:15.996511  108966 round_trippers.go:580]     Audit-Id: 00204d7c-2e3a-46fe-8769-82e66d438b94
	I0610 14:20:15.996521  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:15.996653  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:16.494320  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:16.494340  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:16.494363  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:16.494371  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:16.496569  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:16.496590  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:16.496597  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:16.496603  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:16.496609  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:16.496614  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:16 GMT
	I0610 14:20:16.496620  108966 round_trippers.go:580]     Audit-Id: d425d9f5-2eb4-4d0e-b493-7d3570b3f0b4
	I0610 14:20:16.496625  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:16.496773  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:16.994334  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:16.994359  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:16.994369  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:16.994380  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:16.996441  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:16.996463  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:16.996469  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:16.996475  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:16 GMT
	I0610 14:20:16.996482  108966 round_trippers.go:580]     Audit-Id: 98443eb0-51e2-40e1-8936-13304be1727c
	I0610 14:20:16.996490  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:16.996498  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:16.996508  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:16.996619  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:17.494256  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:17.494288  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:17.494309  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:17.494327  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:17.496587  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:17.496606  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:17.496612  108966 round_trippers.go:580]     Audit-Id: e5caa2be-e5f8-4505-a3d6-3abc63a4e71c
	I0610 14:20:17.496618  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:17.496625  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:17.496633  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:17.496642  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:17.496650  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:17 GMT
	I0610 14:20:17.496764  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:17.497052  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:17.994327  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:17.994352  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:17.994360  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:17.994366  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:17.996618  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:17.996634  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:17.996641  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:17.996646  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:17 GMT
	I0610 14:20:17.996653  108966 round_trippers.go:580]     Audit-Id: e022e874-3a51-4e48-b4b9-14572f15d608
	I0610 14:20:17.996662  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:17.996673  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:17.996680  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:17.996772  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:18.494359  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:18.494377  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:18.494385  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:18.494391  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:18.496677  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:18.496700  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:18.496710  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:18 GMT
	I0610 14:20:18.496719  108966 round_trippers.go:580]     Audit-Id: 888e9195-3ddf-48ce-a6b6-6bbdcb213c69
	I0610 14:20:18.496729  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:18.496741  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:18.496753  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:18.496765  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:18.496866  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:18.994359  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:18.994380  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:18.994388  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:18.994395  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:18.996672  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:18.996694  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:18.996704  108966 round_trippers.go:580]     Audit-Id: 9717ccea-e842-420b-970d-bee5fcb0737e
	I0610 14:20:18.996713  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:18.996723  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:18.996732  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:18.996745  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:18.996757  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:18 GMT
	I0610 14:20:18.996845  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:19.494369  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:19.494389  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:19.494397  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:19.494403  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:19.496705  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:19.496728  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:19.496738  108966 round_trippers.go:580]     Audit-Id: 1f2630b2-6ab9-4559-a5a3-40bffe890674
	I0610 14:20:19.496747  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:19.496755  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:19.496764  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:19.496770  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:19.496778  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:19 GMT
	I0610 14:20:19.496906  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:19.497232  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:19.994343  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:19.994365  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:19.994375  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:19.994385  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:19.996563  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:19.996581  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:19.996588  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:19.996594  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:19.996599  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:19.996604  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:19.996610  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:19 GMT
	I0610 14:20:19.996615  108966 round_trippers.go:580]     Audit-Id: 2d940af1-5b4f-4a20-80db-279899f7b211
	I0610 14:20:19.996714  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:20.494283  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:20.494304  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:20.494312  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:20.494318  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:20.496516  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:20.496537  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:20.496544  108966 round_trippers.go:580]     Audit-Id: be480628-6eff-42b8-af2f-c7ea777a5460
	I0610 14:20:20.496550  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:20.496555  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:20.496560  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:20.496565  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:20.496572  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:20 GMT
	I0610 14:20:20.496747  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:20.994317  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:20.994336  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:20.994344  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:20.994350  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:20.996535  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:20.996559  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:20.996569  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:20.996578  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:20.996587  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:20 GMT
	I0610 14:20:20.996595  108966 round_trippers.go:580]     Audit-Id: 0471256c-0a7a-4db6-b386-a25477aa4ea7
	I0610 14:20:20.996604  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:20.996616  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:20.996741  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:21.494716  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:21.494736  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:21.494744  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:21.494750  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:21.497033  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:21.497055  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:21.497065  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:21.497074  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:21.497080  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:21.497085  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:21 GMT
	I0610 14:20:21.497093  108966 round_trippers.go:580]     Audit-Id: 8fcac37b-cd25-4503-9a88-438e29f28ed3
	I0610 14:20:21.497098  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:21.497232  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:21.497544  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:21.994860  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:21.994879  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:21.994887  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:21.994893  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:21.997131  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:21.997155  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:21.997165  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:21.997172  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:21.997188  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:21.997199  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:21.997207  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:21 GMT
	I0610 14:20:21.997217  108966 round_trippers.go:580]     Audit-Id: 346b37a7-62b2-4e48-971c-d1a4e5e05f95
	I0610 14:20:21.997334  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:22.494929  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:22.494950  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:22.494958  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:22.494964  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:22.497257  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:22.497279  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:22.497289  108966 round_trippers.go:580]     Audit-Id: 6ee48715-ece5-4122-b4d5-b60bda70eb76
	I0610 14:20:22.497296  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:22.497304  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:22.497312  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:22.497320  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:22.497328  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:22 GMT
	I0610 14:20:22.497455  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:22.995112  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:22.995131  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:22.995139  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:22.995145  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:22.997238  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:22.997259  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:22.997269  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:22.997278  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:22.997287  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:22.997295  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:22 GMT
	I0610 14:20:22.997305  108966 round_trippers.go:580]     Audit-Id: 71f62921-7eaa-422e-9059-ed2f76739813
	I0610 14:20:22.997312  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:22.997434  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:23.495070  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:23.495091  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:23.495101  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:23.495110  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:23.497300  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:23.497318  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:23.497325  108966 round_trippers.go:580]     Audit-Id: b80c791c-ebba-459b-9514-98ca8dd23d77
	I0610 14:20:23.497331  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:23.497337  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:23.497346  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:23.497352  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:23.497358  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:23 GMT
	I0610 14:20:23.497506  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:23.497808  108966 node_ready.go:58] node "multinode-007346" has status "Ready":"False"
	I0610 14:20:23.994178  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:23.994196  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:23.994228  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:23.994238  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:23.996245  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:23.996265  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:23.996274  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:23.996282  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:23.996290  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:23 GMT
	I0610 14:20:23.996298  108966 round_trippers.go:580]     Audit-Id: 649e17e7-9768-434f-b488-a93333cbcc14
	I0610 14:20:23.996307  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:23.996321  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:23.996457  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:24.495098  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:24.495117  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:24.495126  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:24.495132  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:24.497413  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:24.497429  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:24.497439  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:24.497448  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:24.497457  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:24 GMT
	I0610 14:20:24.497467  108966 round_trippers.go:580]     Audit-Id: c50530dd-3be9-4454-8a59-00c905a1bdd8
	I0610 14:20:24.497475  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:24.497484  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:24.497599  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:24.994242  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:24.994266  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:24.994275  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:24.994281  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:24.996502  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:24.996533  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:24.996543  108966 round_trippers.go:580]     Audit-Id: 3c5fa8cd-fc93-4b51-9ada-aa76d8762eab
	I0610 14:20:24.996553  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:24.996561  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:24.996566  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:24.996577  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:24.996585  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:24 GMT
	I0610 14:20:24.996673  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"356","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0610 14:20:25.494162  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:25.494190  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:25.494213  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:25.494225  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:25.496124  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:25.496142  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:25.496149  108966 round_trippers.go:580]     Audit-Id: faf9bcfb-2793-4237-9e6f-3260bec3b6f9
	I0610 14:20:25.496154  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:25.496159  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:25.496164  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:25.496170  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:25.496175  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:25 GMT
	I0610 14:20:25.496291  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:20:25.496581  108966 node_ready.go:49] node "multinode-007346" has status "Ready":"True"
	I0610 14:20:25.496595  108966 node_ready.go:38] duration metric: took 30.50602391s waiting for node "multinode-007346" to be "Ready" ...
	I0610 14:20:25.496602  108966 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 14:20:25.496646  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0610 14:20:25.496654  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:25.496660  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:25.496666  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:25.499526  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:25.499544  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:25.499551  108966 round_trippers.go:580]     Audit-Id: 21164024-ce69-438c-819d-da2db8a3bf4a
	I0610 14:20:25.499556  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:25.499562  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:25.499567  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:25.499573  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:25.499579  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:25 GMT
	I0610 14:20:25.499957  108966 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"431"},"items":[{"metadata":{"name":"coredns-5d78c9869d-shl5g","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"cd36daa1-b02e-4fe3-a293-11c38f14826b","resourceVersion":"431","creationTimestamp":"2023-06-10T14:19:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"23a81094-3c32-46de-9e16-9015a058b87b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23a81094-3c32-46de-9e16-9015a058b87b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55533 chars]
	I0610 14:20:25.502864  108966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-shl5g" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:25.502924  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-shl5g
	I0610 14:20:25.502932  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:25.502939  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:25.502945  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:25.504642  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:25.504655  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:25.504664  108966 round_trippers.go:580]     Audit-Id: 74cdad92-362a-4c52-9de0-01622fb7ae5f
	I0610 14:20:25.504670  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:25.504675  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:25.504681  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:25.504690  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:25.504699  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:25 GMT
	I0610 14:20:25.504814  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-shl5g","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"cd36daa1-b02e-4fe3-a293-11c38f14826b","resourceVersion":"431","creationTimestamp":"2023-06-10T14:19:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"23a81094-3c32-46de-9e16-9015a058b87b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23a81094-3c32-46de-9e16-9015a058b87b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0610 14:20:25.505216  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:25.505228  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:25.505235  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:25.505244  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:25.506928  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:25.506946  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:25.506953  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:25.506961  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:25 GMT
	I0610 14:20:25.506969  108966 round_trippers.go:580]     Audit-Id: 8a9eb27d-49c4-4bb7-a46f-d9e041994a0c
	I0610 14:20:25.506978  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:25.506992  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:25.507001  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:25.507572  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:20:26.008225  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-shl5g
	I0610 14:20:26.008243  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.008251  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.008258  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.010776  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:26.010793  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.010800  108966 round_trippers.go:580]     Audit-Id: 591cc606-34d4-4aff-8e5e-07d7a65c7968
	I0610 14:20:26.010806  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.010811  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.010816  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.010822  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.010827  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.011011  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-shl5g","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"cd36daa1-b02e-4fe3-a293-11c38f14826b","resourceVersion":"444","creationTimestamp":"2023-06-10T14:19:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"23a81094-3c32-46de-9e16-9015a058b87b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23a81094-3c32-46de-9e16-9015a058b87b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0610 14:20:26.011442  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:26.011453  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.011460  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.011466  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.013359  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:26.013376  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.013386  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.013394  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.013403  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.013412  108966 round_trippers.go:580]     Audit-Id: 80cda858-4f50-47e9-a291-64f2c635c74d
	I0610 14:20:26.013422  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.013428  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.013556  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:20:26.013845  108966 pod_ready.go:92] pod "coredns-5d78c9869d-shl5g" in "kube-system" namespace has status "Ready":"True"
	I0610 14:20:26.013859  108966 pod_ready.go:81] duration metric: took 510.975555ms waiting for pod "coredns-5d78c9869d-shl5g" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.013867  108966 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.013905  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-007346
	I0610 14:20:26.013912  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.013919  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.013925  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.015782  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:26.015802  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.015811  108966 round_trippers.go:580]     Audit-Id: ccbc4f53-380d-42e5-948d-2dba7cc80c17
	I0610 14:20:26.015821  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.015830  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.015843  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.015853  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.015866  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.015965  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-007346","namespace":"kube-system","uid":"6420712a-1ac5-4bc1-9126-4744fdf88efb","resourceVersion":"299","creationTimestamp":"2023-06-10T14:19:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"37b6fdeb2b133f7dbaa387ba796c1ab4","kubernetes.io/config.mirror":"37b6fdeb2b133f7dbaa387ba796c1ab4","kubernetes.io/config.seen":"2023-06-10T14:19:39.785675959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0610 14:20:26.016430  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:26.016446  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.016455  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.016462  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.018296  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:26.018314  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.018324  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.018333  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.018342  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.018354  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.018368  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.018380  108966 round_trippers.go:580]     Audit-Id: e0ab3f79-5bf1-4667-bd63-dcd10a5df141
	I0610 14:20:26.018515  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:20:26.018804  108966 pod_ready.go:92] pod "etcd-multinode-007346" in "kube-system" namespace has status "Ready":"True"
	I0610 14:20:26.018818  108966 pod_ready.go:81] duration metric: took 4.945636ms waiting for pod "etcd-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.018828  108966 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.018867  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-007346
	I0610 14:20:26.018874  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.018881  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.018887  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.020567  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:26.020584  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.020594  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.020602  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.020610  108966 round_trippers.go:580]     Audit-Id: e01132b6-0ad3-4964-bb3e-745118142775
	I0610 14:20:26.020619  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.020632  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.020645  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.020789  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-007346","namespace":"kube-system","uid":"dfa6499c-9c79-4d60-b19a-a9777559448d","resourceVersion":"296","creationTimestamp":"2023-06-10T14:19:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"907f8705ef160d439db593cd98924499","kubernetes.io/config.mirror":"907f8705ef160d439db593cd98924499","kubernetes.io/config.seen":"2023-06-10T14:19:39.785680156Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0610 14:20:26.021294  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:26.021317  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.021327  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.021338  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.022925  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:26.022944  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.022953  108966 round_trippers.go:580]     Audit-Id: a54b3e41-b7f6-4db6-abd7-b486ed27f455
	I0610 14:20:26.022962  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.022969  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.022980  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.022991  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.023003  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.023113  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:20:26.023376  108966 pod_ready.go:92] pod "kube-apiserver-multinode-007346" in "kube-system" namespace has status "Ready":"True"
	I0610 14:20:26.023388  108966 pod_ready.go:81] duration metric: took 4.554169ms waiting for pod "kube-apiserver-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.023395  108966 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.023433  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-007346
	I0610 14:20:26.023441  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.023449  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.023455  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.025185  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:26.025202  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.025212  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.025220  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.025228  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.025242  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.025252  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.025260  108966 round_trippers.go:580]     Audit-Id: fd64ebba-5655-42ac-9083-c4e16ca1f69f
	I0610 14:20:26.025376  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-007346","namespace":"kube-system","uid":"138c0daf-2ed8-4b72-8bd1-47e4f14030b1","resourceVersion":"293","creationTimestamp":"2023-06-10T14:19:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ca1849216ded5706a7fff56f8b58428f","kubernetes.io/config.mirror":"ca1849216ded5706a7fff56f8b58428f","kubernetes.io/config.seen":"2023-06-10T14:19:39.785681888Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0610 14:20:26.025739  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:26.025751  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.025760  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.025769  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.027228  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:26.027244  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.027252  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.027262  108966 round_trippers.go:580]     Audit-Id: 79a47ebf-5a48-4ea7-9563-85a00c2310a0
	I0610 14:20:26.027271  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.027284  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.027296  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.027307  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.027421  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:20:26.027703  108966 pod_ready.go:92] pod "kube-controller-manager-multinode-007346" in "kube-system" namespace has status "Ready":"True"
	I0610 14:20:26.027716  108966 pod_ready.go:81] duration metric: took 4.315583ms waiting for pod "kube-controller-manager-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.027724  108966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pswh7" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.094986  108966 request.go:628] Waited for 67.212356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pswh7
	I0610 14:20:26.095060  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pswh7
	I0610 14:20:26.095070  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.095084  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.095093  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.097199  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:26.097215  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.097222  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.097227  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.097233  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.097241  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.097249  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.097260  108966 round_trippers.go:580]     Audit-Id: ed4e5d80-51a3-4edc-b75c-ffd8e7b3e281
	I0610 14:20:26.097407  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pswh7","generateName":"kube-proxy-","namespace":"kube-system","uid":"a4e7f056-9b22-442e-a512-a591ec2bff2a","resourceVersion":"404","creationTimestamp":"2023-06-10T14:19:53Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"551ccd1d-3af1-41a9-ad14-2ce1135d55c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"551ccd1d-3af1-41a9-ad14-2ce1135d55c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5508 chars]
	I0610 14:20:26.294834  108966 request.go:628] Waited for 196.858231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:26.294878  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:26.294883  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.294890  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.294896  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.297096  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:26.297119  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.297135  108966 round_trippers.go:580]     Audit-Id: 471fca80-c419-4c1f-a59b-e62282d2bc0b
	I0610 14:20:26.297144  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.297152  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.297160  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.297168  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.297181  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.297321  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:20:26.297667  108966 pod_ready.go:92] pod "kube-proxy-pswh7" in "kube-system" namespace has status "Ready":"True"
	I0610 14:20:26.297683  108966 pod_ready.go:81] duration metric: took 269.953619ms waiting for pod "kube-proxy-pswh7" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.297691  108966 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.495094  108966 request.go:628] Waited for 197.342655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-007346
	I0610 14:20:26.495149  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-007346
	I0610 14:20:26.495153  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.495160  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.495167  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.497382  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:26.497403  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.497414  108966 round_trippers.go:580]     Audit-Id: d04a78a3-46ce-4dff-a016-2b7bf5659bb4
	I0610 14:20:26.497426  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.497434  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.497442  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.497455  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.497464  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.497608  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-007346","namespace":"kube-system","uid":"572e869a-7b30-452e-9389-24f81d604d9f","resourceVersion":"294","creationTimestamp":"2023-06-10T14:19:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b68a00a0437cfef17ee6606fa6c3c05f","kubernetes.io/config.mirror":"b68a00a0437cfef17ee6606fa6c3c05f","kubernetes.io/config.seen":"2023-06-10T14:19:39.785683331Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0610 14:20:26.694919  108966 request.go:628] Waited for 196.946602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:26.694979  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:20:26.694983  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.694991  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.695001  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.697313  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:26.697333  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.697342  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.697350  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.697358  108966 round_trippers.go:580]     Audit-Id: b4e504fd-7780-4e87-a475-b4edc6a55807
	I0610 14:20:26.697365  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.697374  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.697394  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.697513  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:20:26.697822  108966 pod_ready.go:92] pod "kube-scheduler-multinode-007346" in "kube-system" namespace has status "Ready":"True"
	I0610 14:20:26.697835  108966 pod_ready.go:81] duration metric: took 400.139388ms waiting for pod "kube-scheduler-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:20:26.697844  108966 pod_ready.go:38] duration metric: took 1.20123436s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
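The `pod_ready.go` loop above declares each control-plane pod "Ready" by inspecting the `status.conditions` list in the Pod bodies returned by the GETs. A minimal sketch of that test on the same JSON shape (the helper name is ours, not minikube's):

```python
def pod_is_ready(pod: dict) -> bool:
    """Return True if the pod has a condition of type "Ready" with status "True"."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

# Dict layout mirrors the Pod response bodies logged above.
ready_pod = {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
pending_pod = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}

print(pod_is_ready(ready_pod))    # True
print(pod_is_ready(pending_pod))  # False
```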
	I0610 14:20:26.697859  108966 api_server.go:52] waiting for apiserver process to appear ...
	I0610 14:20:26.697909  108966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 14:20:26.708123  108966 command_runner.go:130] > 1441
	I0610 14:20:26.708845  108966 api_server.go:72] duration metric: took 32.339126297s to wait for apiserver process to appear ...
	I0610 14:20:26.708860  108966 api_server.go:88] waiting for apiserver healthz status ...
	I0610 14:20:26.708872  108966 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0610 14:20:26.713051  108966 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
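The healthz check above is a plain HTTP GET that succeeds when the API server answers 200 with body `ok`. A self-contained sketch of the same probe against a local stub server (the real target is `https://192.168.58.2:8443/healthz`; the stub and names here are illustrative):

```python
import http.server
import threading
import urllib.request

class Healthz(http.server.BaseHTTPRequestHandler):
    """Stub standing in for the apiserver's /healthz endpoint."""
    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Healthz)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/healthz"
with urllib.request.urlopen(url) as resp:
    status, body = resp.status, resp.read().decode()
server.shutdown()
print(status, body)  # 200 ok
```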
	I0610 14:20:26.713113  108966 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0610 14:20:26.713123  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.713136  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.713149  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.714035  108966 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 14:20:26.714049  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.714058  108966 round_trippers.go:580]     Audit-Id: 2ee8e042-a082-4b9a-a6f7-4effaa0576ba
	I0610 14:20:26.714066  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.714077  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.714087  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.714100  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.714113  108966 round_trippers.go:580]     Content-Length: 263
	I0610 14:20:26.714126  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.714151  108966 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0610 14:20:26.714262  108966 api_server.go:141] control plane version: v1.27.2
	I0610 14:20:26.714279  108966 api_server.go:131] duration metric: took 5.413922ms to wait for apiserver health ...
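The "control plane version: v1.27.2" line is derived from the `/version` response body logged just above. A sketch of that extraction, using the exact JSON from the log:

```python
import json

# Verbatim /version response body from the log above.
version_body = """{
  "major": "1",
  "minor": "27",
  "gitVersion": "v1.27.2",
  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
  "gitTreeState": "clean",
  "buildDate": "2023-05-17T14:13:28Z",
  "goVersion": "go1.20.4",
  "compiler": "gc",
  "platform": "linux/amd64"
}"""

info = json.loads(version_body)
control_plane_version = info["gitVersion"]
print(control_plane_version)  # v1.27.2
```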
	I0610 14:20:26.714288  108966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 14:20:26.894679  108966 request.go:628] Waited for 180.330648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0610 14:20:26.894776  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0610 14:20:26.894785  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:26.894793  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:26.894799  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:26.897796  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:26.897819  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:26.897830  108966 round_trippers.go:580]     Audit-Id: 50b83bc2-fbde-4ad7-8c21-5f1dff900430
	I0610 14:20:26.897839  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:26.897848  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:26.897857  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:26.897865  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:26.897875  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:26 GMT
	I0610 14:20:26.898409  108966 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-5d78c9869d-shl5g","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"cd36daa1-b02e-4fe3-a293-11c38f14826b","resourceVersion":"444","creationTimestamp":"2023-06-10T14:19:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"23a81094-3c32-46de-9e16-9015a058b87b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23a81094-3c32-46de-9e16-9015a058b87b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I0610 14:20:26.900107  108966 system_pods.go:59] 8 kube-system pods found
	I0610 14:20:26.900127  108966 system_pods.go:61] "coredns-5d78c9869d-shl5g" [cd36daa1-b02e-4fe3-a293-11c38f14826b] Running
	I0610 14:20:26.900132  108966 system_pods.go:61] "etcd-multinode-007346" [6420712a-1ac5-4bc1-9126-4744fdf88efb] Running
	I0610 14:20:26.900135  108966 system_pods.go:61] "kindnet-tsnlt" [79e2addf-dc39-401b-a53a-a31493f50015] Running
	I0610 14:20:26.900139  108966 system_pods.go:61] "kube-apiserver-multinode-007346" [dfa6499c-9c79-4d60-b19a-a9777559448d] Running
	I0610 14:20:26.900144  108966 system_pods.go:61] "kube-controller-manager-multinode-007346" [138c0daf-2ed8-4b72-8bd1-47e4f14030b1] Running
	I0610 14:20:26.900149  108966 system_pods.go:61] "kube-proxy-pswh7" [a4e7f056-9b22-442e-a512-a591ec2bff2a] Running
	I0610 14:20:26.900152  108966 system_pods.go:61] "kube-scheduler-multinode-007346" [572e869a-7b30-452e-9389-24f81d604d9f] Running
	I0610 14:20:26.900157  108966 system_pods.go:61] "storage-provisioner" [0a8cc618-91de-4c43-9ee7-b1e75d4e44bc] Running
	I0610 14:20:26.900161  108966 system_pods.go:74] duration metric: took 185.86944ms to wait for pod list to return data ...
	I0610 14:20:26.900167  108966 default_sa.go:34] waiting for default service account to be created ...
	I0610 14:20:27.094565  108966 request.go:628] Waited for 194.335567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0610 14:20:27.094628  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0610 14:20:27.094642  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:27.094653  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:27.094669  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:27.096825  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:27.096841  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:27.096848  108966 round_trippers.go:580]     Audit-Id: 4cb111b7-c644-4dd0-b079-cf6fc6426373
	I0610 14:20:27.096854  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:27.096859  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:27.096868  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:27.096890  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:27.096905  108966 round_trippers.go:580]     Content-Length: 261
	I0610 14:20:27.096913  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:27 GMT
	I0610 14:20:27.096937  108966 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"9aae070d-8fac-474a-a1f6-26326274129a","resourceVersion":"343","creationTimestamp":"2023-06-10T14:19:53Z"}}]}
	I0610 14:20:27.097126  108966 default_sa.go:45] found service account: "default"
	I0610 14:20:27.097141  108966 default_sa.go:55] duration metric: took 196.967528ms for default service account to be created ...
	I0610 14:20:27.097150  108966 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 14:20:27.294587  108966 request.go:628] Waited for 197.369835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0610 14:20:27.294643  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0610 14:20:27.294650  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:27.294661  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:27.294669  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:27.297618  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:27.297647  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:27.297661  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:27.297670  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:27 GMT
	I0610 14:20:27.297685  108966 round_trippers.go:580]     Audit-Id: f7b8d43a-902c-4519-b885-acb3c9182161
	I0610 14:20:27.297694  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:27.297719  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:27.297736  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:27.298157  108966 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-5d78c9869d-shl5g","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"cd36daa1-b02e-4fe3-a293-11c38f14826b","resourceVersion":"444","creationTimestamp":"2023-06-10T14:19:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"23a81094-3c32-46de-9e16-9015a058b87b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23a81094-3c32-46de-9e16-9015a058b87b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I0610 14:20:27.299890  108966 system_pods.go:86] 8 kube-system pods found
	I0610 14:20:27.299914  108966 system_pods.go:89] "coredns-5d78c9869d-shl5g" [cd36daa1-b02e-4fe3-a293-11c38f14826b] Running
	I0610 14:20:27.299922  108966 system_pods.go:89] "etcd-multinode-007346" [6420712a-1ac5-4bc1-9126-4744fdf88efb] Running
	I0610 14:20:27.299929  108966 system_pods.go:89] "kindnet-tsnlt" [79e2addf-dc39-401b-a53a-a31493f50015] Running
	I0610 14:20:27.299935  108966 system_pods.go:89] "kube-apiserver-multinode-007346" [dfa6499c-9c79-4d60-b19a-a9777559448d] Running
	I0610 14:20:27.299943  108966 system_pods.go:89] "kube-controller-manager-multinode-007346" [138c0daf-2ed8-4b72-8bd1-47e4f14030b1] Running
	I0610 14:20:27.299951  108966 system_pods.go:89] "kube-proxy-pswh7" [a4e7f056-9b22-442e-a512-a591ec2bff2a] Running
	I0610 14:20:27.299959  108966 system_pods.go:89] "kube-scheduler-multinode-007346" [572e869a-7b30-452e-9389-24f81d604d9f] Running
	I0610 14:20:27.299970  108966 system_pods.go:89] "storage-provisioner" [0a8cc618-91de-4c43-9ee7-b1e75d4e44bc] Running
	I0610 14:20:27.299980  108966 system_pods.go:126] duration metric: took 202.821538ms to wait for k8s-apps to be running ...
	I0610 14:20:27.299994  108966 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 14:20:27.300052  108966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:20:27.310454  108966 system_svc.go:56] duration metric: took 10.451274ms WaitForService to wait for kubelet.
	I0610 14:20:27.310481  108966 kubeadm.go:581] duration metric: took 32.940761727s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 14:20:27.310501  108966 node_conditions.go:102] verifying NodePressure condition ...
	I0610 14:20:27.494887  108966 request.go:628] Waited for 184.327168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0610 14:20:27.494955  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0610 14:20:27.494969  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:27.494981  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:27.494995  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:27.497233  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:27.497249  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:27.497256  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:27.497262  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:27.497267  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:27.497273  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:27 GMT
	I0610 14:20:27.497281  108966 round_trippers.go:580]     Audit-Id: 4abdabd8-f62a-464d-90e9-4bf65abf47d6
	I0610 14:20:27.497289  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:27.497439  108966 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0610 14:20:27.497813  108966 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0610 14:20:27.497832  108966 node_conditions.go:123] node cpu capacity is 8
	I0610 14:20:27.497846  108966 node_conditions.go:105] duration metric: took 187.34027ms to run NodePressure ...
	I0610 14:20:27.497859  108966 start.go:228] waiting for startup goroutines ...
	I0610 14:20:27.497873  108966 start.go:233] waiting for cluster config update ...
	I0610 14:20:27.497891  108966 start.go:242] writing updated cluster config ...
	I0610 14:20:27.500639  108966 out.go:177] 
	I0610 14:20:27.502515  108966 config.go:182] Loaded profile config "multinode-007346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:20:27.502598  108966 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/config.json ...
	I0610 14:20:27.504730  108966 out.go:177] * Starting worker node multinode-007346-m02 in cluster multinode-007346
	I0610 14:20:27.506322  108966 cache.go:122] Beginning downloading kic base image for docker with crio
	I0610 14:20:27.507940  108966 out.go:177] * Pulling base image ...
	I0610 14:20:27.510355  108966 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:20:27.510385  108966 cache.go:57] Caching tarball of preloaded images
	I0610 14:20:27.510448  108966 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 14:20:27.510512  108966 preload.go:174] Found /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 14:20:27.510527  108966 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on crio
	I0610 14:20:27.510613  108966 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/config.json ...
	I0610 14:20:27.526107  108966 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0610 14:20:27.526131  108966 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	I0610 14:20:27.526149  108966 cache.go:195] Successfully downloaded all kic artifacts
	I0610 14:20:27.526188  108966 start.go:364] acquiring machines lock for multinode-007346-m02: {Name:mk80d3d43ca7b237d95c62dd5386ad51a3052911 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:20:27.526310  108966 start.go:368] acquired machines lock for "multinode-007346-m02" in 86.072µs
	I0610 14:20:27.526340  108966 start.go:93] Provisioning new machine with config: &{Name:multinode-007346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-007346 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0610 14:20:27.526423  108966 start.go:125] createHost starting for "m02" (driver="docker")
	I0610 14:20:27.529013  108966 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0610 14:20:27.529117  108966 start.go:159] libmachine.API.Create for "multinode-007346" (driver="docker")
	I0610 14:20:27.529143  108966 client.go:168] LocalClient.Create starting
	I0610 14:20:27.529225  108966 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem
	I0610 14:20:27.529254  108966 main.go:141] libmachine: Decoding PEM data...
	I0610 14:20:27.529270  108966 main.go:141] libmachine: Parsing certificate...
	I0610 14:20:27.529318  108966 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem
	I0610 14:20:27.529337  108966 main.go:141] libmachine: Decoding PEM data...
	I0610 14:20:27.529351  108966 main.go:141] libmachine: Parsing certificate...
	I0610 14:20:27.529533  108966 cli_runner.go:164] Run: docker network inspect multinode-007346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 14:20:27.544684  108966 network_create.go:76] Found existing network {name:multinode-007346 subnet:0xc000df86c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0610 14:20:27.544713  108966 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-007346-m02" container
	I0610 14:20:27.544763  108966 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0610 14:20:27.559646  108966 cli_runner.go:164] Run: docker volume create multinode-007346-m02 --label name.minikube.sigs.k8s.io=multinode-007346-m02 --label created_by.minikube.sigs.k8s.io=true
	I0610 14:20:27.575405  108966 oci.go:103] Successfully created a docker volume multinode-007346-m02
	I0610 14:20:27.575481  108966 cli_runner.go:164] Run: docker run --rm --name multinode-007346-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-007346-m02 --entrypoint /usr/bin/test -v multinode-007346-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
	I0610 14:20:28.051313  108966 oci.go:107] Successfully prepared a docker volume multinode-007346-m02
	I0610 14:20:28.051354  108966 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:20:28.051377  108966 kic.go:190] Starting extracting preloaded images to volume ...
	I0610 14:20:28.051443  108966 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-007346-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
	I0610 14:20:32.817110  108966 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-007346-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (4.765623458s)
	I0610 14:20:32.817137  108966 kic.go:199] duration metric: took 4.765758 seconds to extract preloaded images to volume
	W0610 14:20:32.817261  108966 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0610 14:20:32.817344  108966 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0610 14:20:32.861596  108966 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-007346-m02 --name multinode-007346-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-007346-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-007346-m02 --network multinode-007346 --ip 192.168.58.3 --volume multinode-007346-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0610 14:20:33.155177  108966 cli_runner.go:164] Run: docker container inspect multinode-007346-m02 --format={{.State.Running}}
	I0610 14:20:33.173622  108966 cli_runner.go:164] Run: docker container inspect multinode-007346-m02 --format={{.State.Status}}
	I0610 14:20:33.189900  108966 cli_runner.go:164] Run: docker exec multinode-007346-m02 stat /var/lib/dpkg/alternatives/iptables
	I0610 14:20:33.251865  108966 oci.go:144] the created container "multinode-007346-m02" has a running status.
	I0610 14:20:33.251901  108966 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346-m02/id_rsa...
	I0610 14:20:33.356801  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0610 14:20:33.356843  108966 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0610 14:20:33.375343  108966 cli_runner.go:164] Run: docker container inspect multinode-007346-m02 --format={{.State.Status}}
	I0610 14:20:33.390829  108966 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0610 14:20:33.390852  108966 kic_runner.go:114] Args: [docker exec --privileged multinode-007346-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0610 14:20:33.460724  108966 cli_runner.go:164] Run: docker container inspect multinode-007346-m02 --format={{.State.Status}}
	I0610 14:20:33.475676  108966 machine.go:88] provisioning docker machine ...
	I0610 14:20:33.475711  108966 ubuntu.go:169] provisioning hostname "multinode-007346-m02"
	I0610 14:20:33.475792  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346-m02
	I0610 14:20:33.492095  108966 main.go:141] libmachine: Using SSH client type: native
	I0610 14:20:33.492519  108966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0610 14:20:33.492535  108966 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-007346-m02 && echo "multinode-007346-m02" | sudo tee /etc/hostname
	I0610 14:20:33.493204  108966 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33514->127.0.0.1:32852: read: connection reset by peer
	I0610 14:20:36.616286  108966 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-007346-m02
	
	I0610 14:20:36.616364  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346-m02
	I0610 14:20:36.632281  108966 main.go:141] libmachine: Using SSH client type: native
	I0610 14:20:36.632663  108966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0610 14:20:36.632683  108966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-007346-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-007346-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-007346-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 14:20:36.746227  108966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 14:20:36.746259  108966 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15074-18675/.minikube CaCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15074-18675/.minikube}
	I0610 14:20:36.746276  108966 ubuntu.go:177] setting up certificates
	I0610 14:20:36.746284  108966 provision.go:83] configureAuth start
	I0610 14:20:36.746339  108966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-007346-m02
	I0610 14:20:36.761969  108966 provision.go:138] copyHostCerts
	I0610 14:20:36.762004  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem
	I0610 14:20:36.762028  108966 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem, removing ...
	I0610 14:20:36.762034  108966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem
	I0610 14:20:36.762094  108966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem (1078 bytes)
	I0610 14:20:36.762168  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem
	I0610 14:20:36.762189  108966 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem, removing ...
	I0610 14:20:36.762196  108966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem
	I0610 14:20:36.762253  108966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem (1123 bytes)
	I0610 14:20:36.762307  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem
	I0610 14:20:36.762326  108966 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem, removing ...
	I0610 14:20:36.762332  108966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem
	I0610 14:20:36.762353  108966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem (1675 bytes)
	I0610 14:20:36.762401  108966 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem org=jenkins.multinode-007346-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-007346-m02]
	I0610 14:20:36.930743  108966 provision.go:172] copyRemoteCerts
	I0610 14:20:36.930792  108966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 14:20:36.930822  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346-m02
	I0610 14:20:36.946829  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346-m02/id_rsa Username:docker}
	I0610 14:20:37.034228  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 14:20:37.034292  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 14:20:37.055339  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 14:20:37.055397  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0610 14:20:37.075780  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 14:20:37.075847  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 14:20:37.096787  108966 provision.go:86] duration metric: configureAuth took 350.491143ms
	I0610 14:20:37.096813  108966 ubuntu.go:193] setting minikube options for container-runtime
	I0610 14:20:37.096976  108966 config.go:182] Loaded profile config "multinode-007346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:20:37.097063  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346-m02
	I0610 14:20:37.112811  108966 main.go:141] libmachine: Using SSH client type: native
	I0610 14:20:37.113195  108966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0610 14:20:37.113211  108966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 14:20:37.307394  108966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 14:20:37.307422  108966 machine.go:91] provisioned docker machine in 3.831726653s
	I0610 14:20:37.307432  108966 client.go:171] LocalClient.Create took 9.778281191s
	I0610 14:20:37.307454  108966 start.go:167] duration metric: libmachine.API.Create for "multinode-007346" took 9.778336162s
	I0610 14:20:37.307464  108966 start.go:300] post-start starting for "multinode-007346-m02" (driver="docker")
	I0610 14:20:37.307472  108966 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 14:20:37.307536  108966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 14:20:37.307585  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346-m02
	I0610 14:20:37.323561  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346-m02/id_rsa Username:docker}
	I0610 14:20:37.410623  108966 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 14:20:37.413343  108966 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0610 14:20:37.413369  108966 command_runner.go:130] > NAME="Ubuntu"
	I0610 14:20:37.413377  108966 command_runner.go:130] > VERSION_ID="22.04"
	I0610 14:20:37.413386  108966 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0610 14:20:37.413396  108966 command_runner.go:130] > VERSION_CODENAME=jammy
	I0610 14:20:37.413408  108966 command_runner.go:130] > ID=ubuntu
	I0610 14:20:37.413418  108966 command_runner.go:130] > ID_LIKE=debian
	I0610 14:20:37.413424  108966 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0610 14:20:37.413429  108966 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0610 14:20:37.413437  108966 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0610 14:20:37.413446  108966 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0610 14:20:37.413450  108966 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0610 14:20:37.413515  108966 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0610 14:20:37.413541  108966 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0610 14:20:37.413549  108966 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0610 14:20:37.413554  108966 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0610 14:20:37.413563  108966 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/addons for local assets ...
	I0610 14:20:37.413617  108966 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/files for local assets ...
	I0610 14:20:37.413685  108966 filesync.go:149] local asset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> 254852.pem in /etc/ssl/certs
	I0610 14:20:37.413698  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> /etc/ssl/certs/254852.pem
	I0610 14:20:37.413767  108966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 14:20:37.421271  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem --> /etc/ssl/certs/254852.pem (1708 bytes)
	I0610 14:20:37.441735  108966 start.go:303] post-start completed in 134.25897ms
	I0610 14:20:37.442027  108966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-007346-m02
	I0610 14:20:37.458116  108966 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/config.json ...
	I0610 14:20:37.458405  108966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 14:20:37.458446  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346-m02
	I0610 14:20:37.473482  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346-m02/id_rsa Username:docker}
	I0610 14:20:37.554669  108966 command_runner.go:130] > 21%!
	(MISSING)I0610 14:20:37.554743  108966 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0610 14:20:37.558500  108966 command_runner.go:130] > 232G
	I0610 14:20:37.558752  108966 start.go:128] duration metric: createHost completed in 10.032318536s
	I0610 14:20:37.558771  108966 start.go:83] releasing machines lock for "multinode-007346-m02", held for 10.032445214s
	I0610 14:20:37.558840  108966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-007346-m02
	I0610 14:20:37.577131  108966 out.go:177] * Found network options:
	I0610 14:20:37.579032  108966 out.go:177]   - NO_PROXY=192.168.58.2
	W0610 14:20:37.580798  108966 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 14:20:37.580829  108966 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 14:20:37.580890  108966 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 14:20:37.580922  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346-m02
	I0610 14:20:37.580997  108966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 14:20:37.581044  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346-m02
	I0610 14:20:37.596613  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346-m02/id_rsa Username:docker}
	I0610 14:20:37.597512  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346-m02/id_rsa Username:docker}
	I0610 14:20:37.809774  108966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 14:20:37.809778  108966 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 14:20:37.813547  108966 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0610 14:20:37.813573  108966 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0610 14:20:37.813584  108966 command_runner.go:130] > Device: b0h/176d	Inode: 801599      Links: 1
	I0610 14:20:37.813592  108966 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 14:20:37.813598  108966 command_runner.go:130] > Access: 2023-04-04 14:31:21.000000000 +0000
	I0610 14:20:37.813603  108966 command_runner.go:130] > Modify: 2023-04-04 14:31:21.000000000 +0000
	I0610 14:20:37.813608  108966 command_runner.go:130] > Change: 2023-06-10 14:01:36.108366698 +0000
	I0610 14:20:37.813613  108966 command_runner.go:130] >  Birth: 2023-06-10 14:01:36.108366698 +0000
	I0610 14:20:37.813769  108966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:20:37.830859  108966 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0610 14:20:37.830936  108966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:20:37.856775  108966 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0610 14:20:37.856845  108966 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
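Editor's note: the two `find ... -exec mv` runs above show minikube's pattern for disabling conflicting CNI configs — rename with a `.mk_disabled` suffix rather than delete, so the files can be restored later. A minimal sketch of that pattern, replayed against a throwaway directory (the file names are hypothetical stand-ins for the ones in the log):

```shell
# Replay of the CNI-disabling rename pattern in a temp dir (hypothetical config names).
tmp=$(mktemp -d)
touch "$tmp/87-podman-bridge.conflist" "$tmp/100-crio-bridge.conf" "$tmp/10-kindnet.conflist"

# Rename every bridge/podman config that is not already disabled,
# leaving other CNI configs (e.g. kindnet) untouched.
find "$tmp" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "$tmp"
```

Because the originals are only renamed, re-enabling them is a matter of stripping the suffix back off.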
	I0610 14:20:37.856858  108966 start.go:481] detecting cgroup driver to use...
	I0610 14:20:37.856894  108966 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0610 14:20:37.856941  108966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 14:20:37.869807  108966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 14:20:37.879126  108966 docker.go:193] disabling cri-docker service (if available) ...
	I0610 14:20:37.879176  108966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 14:20:37.890507  108966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 14:20:37.902324  108966 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 14:20:37.972837  108966 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 14:20:37.986464  108966 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0610 14:20:38.046422  108966 docker.go:209] disabling docker service ...
	I0610 14:20:38.046487  108966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 14:20:38.062893  108966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 14:20:38.072627  108966 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 14:20:38.150050  108966 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0610 14:20:38.150107  108966 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 14:20:38.160003  108966 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0610 14:20:38.225066  108966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 14:20:38.234567  108966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 14:20:38.246984  108966 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0610 14:20:38.247693  108966 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 14:20:38.247736  108966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:20:38.255972  108966 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 14:20:38.256016  108966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:20:38.264014  108966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:20:38.271930  108966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
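Editor's note: the four `sed` runs above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place — pin the pause image, set the cgroup manager to match the host, and replace `conmon_cgroup` with `"pod"`. The same edits, replayed on a temp copy with hypothetical starting values:

```shell
# Hypothetical stand-in for /etc/crio/crio.conf.d/02-crio.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

# Point cri-o at the pause image the kubelet expects.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
# Match the cgroup driver detected on the host ("cgroupfs" in this run).
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
# Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"

cat "$conf"
```

The delete-then-append dance for `conmon_cgroup` keeps the edit idempotent: rerunning it never duplicates the line (note the `a` append text on the same line is a GNU sed extension, fine on the Linux hosts these tests run on).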
	I0610 14:20:38.279854  108966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 14:20:38.287513  108966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 14:20:38.293790  108966 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 14:20:38.294432  108966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 14:20:38.301190  108966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 14:20:38.371888  108966 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 14:20:38.455592  108966 start.go:528] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 14:20:38.455648  108966 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 14:20:38.459116  108966 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0610 14:20:38.459140  108966 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 14:20:38.459148  108966 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0610 14:20:38.459155  108966 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 14:20:38.459159  108966 command_runner.go:130] > Access: 2023-06-10 14:20:38.438241635 +0000
	I0610 14:20:38.459167  108966 command_runner.go:130] > Modify: 2023-06-10 14:20:38.438241635 +0000
	I0610 14:20:38.459175  108966 command_runner.go:130] > Change: 2023-06-10 14:20:38.438241635 +0000
	I0610 14:20:38.459180  108966 command_runner.go:130] >  Birth: -
	I0610 14:20:38.459197  108966 start.go:549] Will wait 60s for crictl version
	I0610 14:20:38.459238  108966 ssh_runner.go:195] Run: which crictl
	I0610 14:20:38.462236  108966 command_runner.go:130] > /usr/bin/crictl
	I0610 14:20:38.462314  108966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 14:20:38.491965  108966 command_runner.go:130] > Version:  0.1.0
	I0610 14:20:38.491986  108966 command_runner.go:130] > RuntimeName:  cri-o
	I0610 14:20:38.491991  108966 command_runner.go:130] > RuntimeVersion:  1.24.5
	I0610 14:20:38.491996  108966 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 14:20:38.492011  108966 start.go:565] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.5
	RuntimeApiVersion:  v1
	I0610 14:20:38.492058  108966 ssh_runner.go:195] Run: crio --version
	I0610 14:20:38.523599  108966 command_runner.go:130] > crio version 1.24.5
	I0610 14:20:38.523618  108966 command_runner.go:130] > Version:          1.24.5
	I0610 14:20:38.523624  108966 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0610 14:20:38.523628  108966 command_runner.go:130] > GitTreeState:     clean
	I0610 14:20:38.523634  108966 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0610 14:20:38.523638  108966 command_runner.go:130] > GoVersion:        go1.18.2
	I0610 14:20:38.523642  108966 command_runner.go:130] > Compiler:         gc
	I0610 14:20:38.523646  108966 command_runner.go:130] > Platform:         linux/amd64
	I0610 14:20:38.523655  108966 command_runner.go:130] > Linkmode:         dynamic
	I0610 14:20:38.523663  108966 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0610 14:20:38.523667  108966 command_runner.go:130] > SeccompEnabled:   true
	I0610 14:20:38.523671  108966 command_runner.go:130] > AppArmorEnabled:  false
	I0610 14:20:38.523730  108966 ssh_runner.go:195] Run: crio --version
	I0610 14:20:38.554970  108966 command_runner.go:130] > crio version 1.24.5
	I0610 14:20:38.554991  108966 command_runner.go:130] > Version:          1.24.5
	I0610 14:20:38.555001  108966 command_runner.go:130] > GitCommit:        b007cb6753d97de6218787b6894b0e3cc1dc8ecd
	I0610 14:20:38.555008  108966 command_runner.go:130] > GitTreeState:     clean
	I0610 14:20:38.555016  108966 command_runner.go:130] > BuildDate:        2023-04-04T14:31:22Z
	I0610 14:20:38.555022  108966 command_runner.go:130] > GoVersion:        go1.18.2
	I0610 14:20:38.555029  108966 command_runner.go:130] > Compiler:         gc
	I0610 14:20:38.555036  108966 command_runner.go:130] > Platform:         linux/amd64
	I0610 14:20:38.555044  108966 command_runner.go:130] > Linkmode:         dynamic
	I0610 14:20:38.555054  108966 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0610 14:20:38.555062  108966 command_runner.go:130] > SeccompEnabled:   true
	I0610 14:20:38.555066  108966 command_runner.go:130] > AppArmorEnabled:  false
	I0610 14:20:38.558254  108966 out.go:177] * Preparing Kubernetes v1.27.2 on CRI-O 1.24.5 ...
	I0610 14:20:38.559882  108966 out.go:177]   - env NO_PROXY=192.168.58.2
	I0610 14:20:38.561439  108966 cli_runner.go:164] Run: docker network inspect multinode-007346 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0610 14:20:38.576805  108966 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0610 14:20:38.580254  108966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
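Editor's note: the `/etc/hosts` update above uses a filter-then-append pattern — strip any stale `host.minikube.internal` entry, append the fresh one, and copy the result back — so it stays idempotent across restarts. A sketch against a temp file (with a simplified, untabbed grep pattern):

```shell
# Idempotent hosts-entry update, replayed on a temp file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.58.1\thost.minikube.internal\n' > "$hosts"

# Drop any existing entry for the name, then append the current one.
{ grep -v 'host\.minikube\.internal' "$hosts"; \
  printf '192.168.58.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

Running the block repeatedly always leaves exactly one `host.minikube.internal` line, which is the property the real command relies on.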
	I0610 14:20:38.589911  108966 certs.go:56] Setting up /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346 for IP: 192.168.58.3
	I0610 14:20:38.589937  108966 certs.go:190] acquiring lock for shared ca certs: {Name:mk47e57fed67616a983122d88149f57794c568cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 14:20:38.590056  108966 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key
	I0610 14:20:38.590091  108966 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key
	I0610 14:20:38.590104  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 14:20:38.590118  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 14:20:38.590127  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 14:20:38.590142  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 14:20:38.590189  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485.pem (1338 bytes)
	W0610 14:20:38.590242  108966 certs.go:433] ignoring /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485_empty.pem, impossibly tiny 0 bytes
	I0610 14:20:38.590253  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 14:20:38.590277  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem (1078 bytes)
	I0610 14:20:38.590301  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem (1123 bytes)
	I0610 14:20:38.590322  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem (1675 bytes)
	I0610 14:20:38.590362  108966 certs.go:437] found cert: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem (1708 bytes)
	I0610 14:20:38.590388  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485.pem -> /usr/share/ca-certificates/25485.pem
	I0610 14:20:38.590402  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> /usr/share/ca-certificates/254852.pem
	I0610 14:20:38.590415  108966 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:20:38.590810  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 14:20:38.610982  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 14:20:38.630831  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 14:20:38.650318  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 14:20:38.669916  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/certs/25485.pem --> /usr/share/ca-certificates/25485.pem (1338 bytes)
	I0610 14:20:38.689458  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem --> /usr/share/ca-certificates/254852.pem (1708 bytes)
	I0610 14:20:38.708786  108966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 14:20:38.729567  108966 ssh_runner.go:195] Run: openssl version
	I0610 14:20:38.734773  108966 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0610 14:20:38.734944  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 14:20:38.742511  108966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:20:38.745315  108966 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 14:02 /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:20:38.745370  108966 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 14:02 /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:20:38.745407  108966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 14:20:38.751042  108966 command_runner.go:130] > b5213941
	I0610 14:20:38.751199  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 14:20:38.759168  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25485.pem && ln -fs /usr/share/ca-certificates/25485.pem /etc/ssl/certs/25485.pem"
	I0610 14:20:38.767150  108966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25485.pem
	I0610 14:20:38.770145  108966 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 14:07 /usr/share/ca-certificates/25485.pem
	I0610 14:20:38.770162  108966 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 14:07 /usr/share/ca-certificates/25485.pem
	I0610 14:20:38.770189  108966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25485.pem
	I0610 14:20:38.776487  108966 command_runner.go:130] > 51391683
	I0610 14:20:38.776686  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25485.pem /etc/ssl/certs/51391683.0"
	I0610 14:20:38.784850  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254852.pem && ln -fs /usr/share/ca-certificates/254852.pem /etc/ssl/certs/254852.pem"
	I0610 14:20:38.792684  108966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254852.pem
	I0610 14:20:38.795616  108966 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 14:07 /usr/share/ca-certificates/254852.pem
	I0610 14:20:38.795642  108966 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 14:07 /usr/share/ca-certificates/254852.pem
	I0610 14:20:38.795674  108966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254852.pem
	I0610 14:20:38.801453  108966 command_runner.go:130] > 3ec20f2e
	I0610 14:20:38.801609  108966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254852.pem /etc/ssl/certs/3ec20f2e.0"
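Editor's note: the `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's CA-directory lookup convention — each trusted cert is reachable via a `<subject-hash>.0` symlink in `/etc/ssl/certs`. A self-contained demonstration with a throwaway self-signed CA (assumes `openssl` is on PATH; all names are hypothetical):

```shell
# Demonstrate the <subject-hash>.0 symlink convention in a temp CA directory.
certs=$(mktemp -d)

# Generate a throwaway self-signed CA cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$certs/ca.key" -out "$certs/ca.pem" \
  -subj "/CN=example-test-ca" 2>/dev/null

# OpenSSL locates CAs in a -CApath directory by subject-hash symlinks.
hash=$(openssl x509 -hash -noout -in "$certs/ca.pem")
ln -fs "$certs/ca.pem" "$certs/$hash.0"

# Verification succeeds only because the symlink lets OpenSSL find the issuer.
openssl verify -CApath "$certs" "$certs/ca.pem"
```

This is why the log hashes each cert before creating the `/etc/ssl/certs/<hash>.0` link: without the hash-named symlink, tools using the system CA path would not find the minikube CA.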
	I0610 14:20:38.809235  108966 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 14:20:38.812033  108966 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 14:20:38.812072  108966 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 14:20:38.812152  108966 ssh_runner.go:195] Run: crio config
	I0610 14:20:38.845703  108966 command_runner.go:130] ! time="2023-06-10 14:20:38.845333293Z" level=info msg="Starting CRI-O, version: 1.24.5, git: b007cb6753d97de6218787b6894b0e3cc1dc8ecd(clean)"
	I0610 14:20:38.845736  108966 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0610 14:20:38.850048  108966 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0610 14:20:38.850068  108966 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0610 14:20:38.850075  108966 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0610 14:20:38.850078  108966 command_runner.go:130] > #
	I0610 14:20:38.850085  108966 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0610 14:20:38.850094  108966 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0610 14:20:38.850100  108966 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0610 14:20:38.850113  108966 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0610 14:20:38.850124  108966 command_runner.go:130] > # reload'.
	I0610 14:20:38.850136  108966 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0610 14:20:38.850145  108966 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0610 14:20:38.850151  108966 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0610 14:20:38.850159  108966 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0610 14:20:38.850163  108966 command_runner.go:130] > [crio]
	I0610 14:20:38.850169  108966 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0610 14:20:38.850176  108966 command_runner.go:130] > # containers images, in this directory.
	I0610 14:20:38.850185  108966 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0610 14:20:38.850194  108966 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0610 14:20:38.850213  108966 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0610 14:20:38.850224  108966 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0610 14:20:38.850237  108966 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0610 14:20:38.850247  108966 command_runner.go:130] > # storage_driver = "vfs"
	I0610 14:20:38.850253  108966 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0610 14:20:38.850261  108966 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0610 14:20:38.850265  108966 command_runner.go:130] > # storage_option = [
	I0610 14:20:38.850270  108966 command_runner.go:130] > # ]
	I0610 14:20:38.850279  108966 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0610 14:20:38.850292  108966 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0610 14:20:38.850299  108966 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0610 14:20:38.850304  108966 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0610 14:20:38.850312  108966 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0610 14:20:38.850319  108966 command_runner.go:130] > # always happen on a node reboot
	I0610 14:20:38.850324  108966 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0610 14:20:38.850331  108966 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0610 14:20:38.850340  108966 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0610 14:20:38.850354  108966 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0610 14:20:38.850361  108966 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0610 14:20:38.850368  108966 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0610 14:20:38.850378  108966 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0610 14:20:38.850384  108966 command_runner.go:130] > # internal_wipe = true
	I0610 14:20:38.850389  108966 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0610 14:20:38.850397  108966 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0610 14:20:38.850405  108966 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0610 14:20:38.850413  108966 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0610 14:20:38.850427  108966 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0610 14:20:38.850434  108966 command_runner.go:130] > [crio.api]
	I0610 14:20:38.850439  108966 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0610 14:20:38.850446  108966 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0610 14:20:38.850451  108966 command_runner.go:130] > # IP address on which the stream server will listen.
	I0610 14:20:38.850458  108966 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0610 14:20:38.850464  108966 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0610 14:20:38.850472  108966 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0610 14:20:38.850476  108966 command_runner.go:130] > # stream_port = "0"
	I0610 14:20:38.850483  108966 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0610 14:20:38.850487  108966 command_runner.go:130] > # stream_enable_tls = false
	I0610 14:20:38.850497  108966 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0610 14:20:38.850503  108966 command_runner.go:130] > # stream_idle_timeout = ""
	I0610 14:20:38.850509  108966 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0610 14:20:38.850518  108966 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0610 14:20:38.850524  108966 command_runner.go:130] > # minutes.
	I0610 14:20:38.850529  108966 command_runner.go:130] > # stream_tls_cert = ""
	I0610 14:20:38.850537  108966 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0610 14:20:38.850547  108966 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0610 14:20:38.850554  108966 command_runner.go:130] > # stream_tls_key = ""
	I0610 14:20:38.850560  108966 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0610 14:20:38.850568  108966 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0610 14:20:38.850576  108966 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0610 14:20:38.850580  108966 command_runner.go:130] > # stream_tls_ca = ""
	I0610 14:20:38.850589  108966 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0610 14:20:38.850596  108966 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0610 14:20:38.850602  108966 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0610 14:20:38.850609  108966 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0610 14:20:38.850630  108966 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0610 14:20:38.850638  108966 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0610 14:20:38.850642  108966 command_runner.go:130] > [crio.runtime]
	I0610 14:20:38.850648  108966 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0610 14:20:38.850655  108966 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0610 14:20:38.850662  108966 command_runner.go:130] > # "nofile=1024:2048"
	I0610 14:20:38.850668  108966 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0610 14:20:38.850674  108966 command_runner.go:130] > # default_ulimits = [
	I0610 14:20:38.850680  108966 command_runner.go:130] > # ]
	I0610 14:20:38.850688  108966 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0610 14:20:38.850694  108966 command_runner.go:130] > # no_pivot = false
	I0610 14:20:38.850699  108966 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0610 14:20:38.850708  108966 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0610 14:20:38.850715  108966 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0610 14:20:38.850721  108966 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0610 14:20:38.850728  108966 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0610 14:20:38.850734  108966 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0610 14:20:38.850740  108966 command_runner.go:130] > # conmon = ""
	I0610 14:20:38.850744  108966 command_runner.go:130] > # Cgroup setting for conmon
	I0610 14:20:38.850753  108966 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0610 14:20:38.850759  108966 command_runner.go:130] > conmon_cgroup = "pod"
	I0610 14:20:38.850765  108966 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0610 14:20:38.850772  108966 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0610 14:20:38.850779  108966 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0610 14:20:38.850785  108966 command_runner.go:130] > # conmon_env = [
	I0610 14:20:38.850788  108966 command_runner.go:130] > # ]
	I0610 14:20:38.850797  108966 command_runner.go:130] > # Additional environment variables to set for all the
	I0610 14:20:38.850802  108966 command_runner.go:130] > # containers. These are overridden if set in the
	I0610 14:20:38.850807  108966 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0610 14:20:38.850813  108966 command_runner.go:130] > # default_env = [
	I0610 14:20:38.850817  108966 command_runner.go:130] > # ]
	I0610 14:20:38.850825  108966 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0610 14:20:38.850831  108966 command_runner.go:130] > # selinux = false
	I0610 14:20:38.850837  108966 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0610 14:20:38.850845  108966 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0610 14:20:38.850853  108966 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0610 14:20:38.850859  108966 command_runner.go:130] > # seccomp_profile = ""
	I0610 14:20:38.850865  108966 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0610 14:20:38.850872  108966 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0610 14:20:38.850881  108966 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0610 14:20:38.850885  108966 command_runner.go:130] > # which might increase security.
	I0610 14:20:38.850892  108966 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0610 14:20:38.850898  108966 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0610 14:20:38.850908  108966 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0610 14:20:38.850919  108966 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0610 14:20:38.850927  108966 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0610 14:20:38.850934  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:20:38.850938  108966 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0610 14:20:38.850946  108966 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0610 14:20:38.850952  108966 command_runner.go:130] > # the cgroup blockio controller.
	I0610 14:20:38.850956  108966 command_runner.go:130] > # blockio_config_file = ""
	I0610 14:20:38.850965  108966 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0610 14:20:38.850968  108966 command_runner.go:130] > # irqbalance daemon.
	I0610 14:20:38.850976  108966 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0610 14:20:38.850984  108966 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0610 14:20:38.850991  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:20:38.850997  108966 command_runner.go:130] > # rdt_config_file = ""
	I0610 14:20:38.851003  108966 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0610 14:20:38.851009  108966 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0610 14:20:38.851015  108966 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0610 14:20:38.851022  108966 command_runner.go:130] > # separate_pull_cgroup = ""
	I0610 14:20:38.851028  108966 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0610 14:20:38.851038  108966 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0610 14:20:38.851045  108966 command_runner.go:130] > # will be added.
	I0610 14:20:38.851049  108966 command_runner.go:130] > # default_capabilities = [
	I0610 14:20:38.851055  108966 command_runner.go:130] > # 	"CHOWN",
	I0610 14:20:38.851059  108966 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0610 14:20:38.851064  108966 command_runner.go:130] > # 	"FSETID",
	I0610 14:20:38.851068  108966 command_runner.go:130] > # 	"FOWNER",
	I0610 14:20:38.851073  108966 command_runner.go:130] > # 	"SETGID",
	I0610 14:20:38.851077  108966 command_runner.go:130] > # 	"SETUID",
	I0610 14:20:38.851083  108966 command_runner.go:130] > # 	"SETPCAP",
	I0610 14:20:38.851087  108966 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0610 14:20:38.851093  108966 command_runner.go:130] > # 	"KILL",
	I0610 14:20:38.851096  108966 command_runner.go:130] > # ]
	I0610 14:20:38.851106  108966 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0610 14:20:38.851114  108966 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0610 14:20:38.851121  108966 command_runner.go:130] > # add_inheritable_capabilities = true
	I0610 14:20:38.851126  108966 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0610 14:20:38.851134  108966 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0610 14:20:38.851143  108966 command_runner.go:130] > # default_sysctls = [
	I0610 14:20:38.851149  108966 command_runner.go:130] > # ]
	I0610 14:20:38.851154  108966 command_runner.go:130] > # List of devices on the host that a
	I0610 14:20:38.851162  108966 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0610 14:20:38.851168  108966 command_runner.go:130] > # allowed_devices = [
	I0610 14:20:38.851172  108966 command_runner.go:130] > # 	"/dev/fuse",
	I0610 14:20:38.851177  108966 command_runner.go:130] > # ]
	I0610 14:20:38.851182  108966 command_runner.go:130] > # List of additional devices, specified as
	I0610 14:20:38.851215  108966 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0610 14:20:38.851224  108966 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0610 14:20:38.851229  108966 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0610 14:20:38.851233  108966 command_runner.go:130] > # additional_devices = [
	I0610 14:20:38.851236  108966 command_runner.go:130] > # ]
	I0610 14:20:38.851242  108966 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0610 14:20:38.851249  108966 command_runner.go:130] > # cdi_spec_dirs = [
	I0610 14:20:38.851252  108966 command_runner.go:130] > # 	"/etc/cdi",
	I0610 14:20:38.851259  108966 command_runner.go:130] > # 	"/var/run/cdi",
	I0610 14:20:38.851262  108966 command_runner.go:130] > # ]
	I0610 14:20:38.851272  108966 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0610 14:20:38.851281  108966 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0610 14:20:38.851287  108966 command_runner.go:130] > # Defaults to false.
	I0610 14:20:38.851302  108966 command_runner.go:130] > # device_ownership_from_security_context = false
	I0610 14:20:38.851310  108966 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0610 14:20:38.851318  108966 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0610 14:20:38.851322  108966 command_runner.go:130] > # hooks_dir = [
	I0610 14:20:38.851329  108966 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0610 14:20:38.851333  108966 command_runner.go:130] > # ]
	I0610 14:20:38.851341  108966 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0610 14:20:38.851349  108966 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0610 14:20:38.851356  108966 command_runner.go:130] > # its default mounts from the following two files:
	I0610 14:20:38.851360  108966 command_runner.go:130] > #
	I0610 14:20:38.851368  108966 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0610 14:20:38.851376  108966 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0610 14:20:38.851385  108966 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0610 14:20:38.851390  108966 command_runner.go:130] > #
	I0610 14:20:38.851396  108966 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0610 14:20:38.851408  108966 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0610 14:20:38.851416  108966 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0610 14:20:38.851423  108966 command_runner.go:130] > #      only add mounts it finds in this file.
	I0610 14:20:38.851428  108966 command_runner.go:130] > #
	I0610 14:20:38.851432  108966 command_runner.go:130] > # default_mounts_file = ""
	I0610 14:20:38.851439  108966 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0610 14:20:38.851446  108966 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0610 14:20:38.851452  108966 command_runner.go:130] > # pids_limit = 0
	I0610 14:20:38.851458  108966 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0610 14:20:38.851466  108966 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0610 14:20:38.851475  108966 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0610 14:20:38.851484  108966 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0610 14:20:38.851490  108966 command_runner.go:130] > # log_size_max = -1
	I0610 14:20:38.851497  108966 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0610 14:20:38.851503  108966 command_runner.go:130] > # log_to_journald = false
	I0610 14:20:38.851508  108966 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0610 14:20:38.851515  108966 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0610 14:20:38.851520  108966 command_runner.go:130] > # Path to directory for container attach sockets.
	I0610 14:20:38.851530  108966 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0610 14:20:38.851538  108966 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0610 14:20:38.851544  108966 command_runner.go:130] > # bind_mount_prefix = ""
	I0610 14:20:38.851550  108966 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0610 14:20:38.851556  108966 command_runner.go:130] > # read_only = false
	I0610 14:20:38.851562  108966 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0610 14:20:38.851570  108966 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0610 14:20:38.851575  108966 command_runner.go:130] > # live configuration reload.
	I0610 14:20:38.851579  108966 command_runner.go:130] > # log_level = "info"
	I0610 14:20:38.851584  108966 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0610 14:20:38.851591  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:20:38.851595  108966 command_runner.go:130] > # log_filter = ""
	I0610 14:20:38.851603  108966 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0610 14:20:38.851611  108966 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0610 14:20:38.851617  108966 command_runner.go:130] > # separated by comma.
	I0610 14:20:38.851621  108966 command_runner.go:130] > # uid_mappings = ""
	I0610 14:20:38.851629  108966 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0610 14:20:38.851637  108966 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0610 14:20:38.851646  108966 command_runner.go:130] > # separated by comma.
	I0610 14:20:38.851652  108966 command_runner.go:130] > # gid_mappings = ""
	I0610 14:20:38.851658  108966 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0610 14:20:38.851666  108966 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0610 14:20:38.851672  108966 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0610 14:20:38.851678  108966 command_runner.go:130] > # minimum_mappable_uid = -1
	I0610 14:20:38.851684  108966 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0610 14:20:38.851692  108966 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0610 14:20:38.851701  108966 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0610 14:20:38.851707  108966 command_runner.go:130] > # minimum_mappable_gid = -1
	I0610 14:20:38.851713  108966 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0610 14:20:38.851721  108966 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0610 14:20:38.851729  108966 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0610 14:20:38.851735  108966 command_runner.go:130] > # ctr_stop_timeout = 30
	I0610 14:20:38.851741  108966 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0610 14:20:38.851751  108966 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0610 14:20:38.851756  108966 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0610 14:20:38.851762  108966 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0610 14:20:38.851768  108966 command_runner.go:130] > # drop_infra_ctr = true
	I0610 14:20:38.851777  108966 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0610 14:20:38.851784  108966 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0610 14:20:38.851794  108966 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0610 14:20:38.851800  108966 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0610 14:20:38.851805  108966 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0610 14:20:38.851811  108966 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0610 14:20:38.851816  108966 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0610 14:20:38.851822  108966 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0610 14:20:38.851828  108966 command_runner.go:130] > # pinns_path = ""
	I0610 14:20:38.851834  108966 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0610 14:20:38.851843  108966 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0610 14:20:38.851852  108966 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0610 14:20:38.851858  108966 command_runner.go:130] > # default_runtime = "runc"
	I0610 14:20:38.851863  108966 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0610 14:20:38.851872  108966 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0610 14:20:38.851883  108966 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0610 14:20:38.851890  108966 command_runner.go:130] > # creation as a file is not desired either.
	I0610 14:20:38.851904  108966 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0610 14:20:38.851912  108966 command_runner.go:130] > # the hostname is being managed dynamically.
	I0610 14:20:38.851919  108966 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0610 14:20:38.851922  108966 command_runner.go:130] > # ]
	I0610 14:20:38.851931  108966 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0610 14:20:38.851939  108966 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0610 14:20:38.851946  108966 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0610 14:20:38.851954  108966 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0610 14:20:38.851960  108966 command_runner.go:130] > #
	I0610 14:20:38.851964  108966 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0610 14:20:38.851971  108966 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0610 14:20:38.851975  108966 command_runner.go:130] > #  runtime_type = "oci"
	I0610 14:20:38.851981  108966 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0610 14:20:38.851986  108966 command_runner.go:130] > #  privileged_without_host_devices = false
	I0610 14:20:38.851993  108966 command_runner.go:130] > #  allowed_annotations = []
	I0610 14:20:38.851997  108966 command_runner.go:130] > # Where:
	I0610 14:20:38.852004  108966 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0610 14:20:38.852013  108966 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0610 14:20:38.852023  108966 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0610 14:20:38.852029  108966 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0610 14:20:38.852035  108966 command_runner.go:130] > #   in $PATH.
	I0610 14:20:38.852041  108966 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0610 14:20:38.852048  108966 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0610 14:20:38.852055  108966 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0610 14:20:38.852060  108966 command_runner.go:130] > #   state.
	I0610 14:20:38.852066  108966 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0610 14:20:38.852074  108966 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0610 14:20:38.852083  108966 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0610 14:20:38.852091  108966 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0610 14:20:38.852099  108966 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0610 14:20:38.852106  108966 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0610 14:20:38.852113  108966 command_runner.go:130] > #   The currently recognized values are:
	I0610 14:20:38.852119  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0610 14:20:38.852127  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0610 14:20:38.852135  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0610 14:20:38.852143  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0610 14:20:38.852155  108966 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0610 14:20:38.852164  108966 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0610 14:20:38.852172  108966 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0610 14:20:38.852181  108966 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0610 14:20:38.852188  108966 command_runner.go:130] > #   should be moved to the container's cgroup
	I0610 14:20:38.852196  108966 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0610 14:20:38.852200  108966 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0610 14:20:38.852206  108966 command_runner.go:130] > runtime_type = "oci"
	I0610 14:20:38.852211  108966 command_runner.go:130] > runtime_root = "/run/runc"
	I0610 14:20:38.852217  108966 command_runner.go:130] > runtime_config_path = ""
	I0610 14:20:38.852221  108966 command_runner.go:130] > monitor_path = ""
	I0610 14:20:38.852227  108966 command_runner.go:130] > monitor_cgroup = ""
	I0610 14:20:38.852232  108966 command_runner.go:130] > monitor_exec_cgroup = ""
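Following the handler format documented in the config above, an additional runtime-handler entry would look like the sketch below. The `crun` binary path, root directory, and annotation list are illustrative assumptions, not values taken from this test run:

```toml
# Hypothetical extra handler; runtime_path and runtime_root are assumptions.
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
# Restrict which experimental annotations this handler may process.
allowed_annotations = ["io.kubernetes.cri-o.Devices"]
```

A pod selects this handler via a RuntimeClass whose `handler` field is `crun`; with no handler set, CRI-O falls back to `default_runtime`.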
	I0610 14:20:38.852279  108966 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0610 14:20:38.852295  108966 command_runner.go:130] > # running containers
	I0610 14:20:38.852300  108966 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0610 14:20:38.852306  108966 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0610 14:20:38.852315  108966 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0610 14:20:38.852323  108966 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0610 14:20:38.852328  108966 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0610 14:20:38.852335  108966 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0610 14:20:38.852340  108966 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0610 14:20:38.852347  108966 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0610 14:20:38.852351  108966 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0610 14:20:38.852358  108966 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0610 14:20:38.852364  108966 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0610 14:20:38.852371  108966 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0610 14:20:38.852380  108966 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0610 14:20:38.852389  108966 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0610 14:20:38.852396  108966 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0610 14:20:38.852404  108966 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0610 14:20:38.852416  108966 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0610 14:20:38.852425  108966 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0610 14:20:38.852433  108966 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0610 14:20:38.852442  108966 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0610 14:20:38.852448  108966 command_runner.go:130] > # Example:
	I0610 14:20:38.852453  108966 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0610 14:20:38.852461  108966 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0610 14:20:38.852466  108966 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0610 14:20:38.852473  108966 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0610 14:20:38.852476  108966 command_runner.go:130] > # cpuset = "0-1"
	I0610 14:20:38.852480  108966 command_runner.go:130] > # cpushares = "0"
	I0610 14:20:38.852486  108966 command_runner.go:130] > # Where:
	I0610 14:20:38.852490  108966 command_runner.go:130] > # The workload name is workload-type.
	I0610 14:20:38.852499  108966 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0610 14:20:38.852507  108966 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0610 14:20:38.852512  108966 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0610 14:20:38.852523  108966 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0610 14:20:38.852531  108966 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0610 14:20:38.852537  108966 command_runner.go:130] > # 
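The workload mechanism described in the comments above can be sketched end to end. The workload name `throttled` and the annotation values here are illustrative only, not taken from this run:

```toml
# A workload activated by the (hypothetical) "io.crio/throttled" pod annotation.
[crio.runtime.workloads.throttled]
activation_annotation = "io.crio/throttled"
annotation_prefix = "io.crio.throttled"
[crio.runtime.workloads.throttled.resources]
cpuset = "0-1"    # default CPU list for opted-in containers
cpushares = "512" # default CPU shares
```

A pod opts in by carrying the `io.crio/throttled` annotation (value ignored); per-container overrides then use annotation keys of the form `io.crio.throttled.cpushares/<container-name>`.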
	I0610 14:20:38.852543  108966 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0610 14:20:38.852548  108966 command_runner.go:130] > #
	I0610 14:20:38.852554  108966 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0610 14:20:38.852562  108966 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0610 14:20:38.852569  108966 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0610 14:20:38.852578  108966 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0610 14:20:38.852586  108966 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0610 14:20:38.852591  108966 command_runner.go:130] > [crio.image]
	I0610 14:20:38.852597  108966 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0610 14:20:38.852604  108966 command_runner.go:130] > # default_transport = "docker://"
	I0610 14:20:38.852609  108966 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0610 14:20:38.852617  108966 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0610 14:20:38.852624  108966 command_runner.go:130] > # global_auth_file = ""
	I0610 14:20:38.852629  108966 command_runner.go:130] > # The image used to instantiate infra containers.
	I0610 14:20:38.852637  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:20:38.852644  108966 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0610 14:20:38.852650  108966 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0610 14:20:38.852658  108966 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0610 14:20:38.852663  108966 command_runner.go:130] > # This option supports live configuration reload.
	I0610 14:20:38.852669  108966 command_runner.go:130] > # pause_image_auth_file = ""
	I0610 14:20:38.852674  108966 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0610 14:20:38.852683  108966 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0610 14:20:38.852692  108966 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0610 14:20:38.852700  108966 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0610 14:20:38.852706  108966 command_runner.go:130] > # pause_command = "/pause"
	I0610 14:20:38.852712  108966 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0610 14:20:38.852720  108966 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0610 14:20:38.852728  108966 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0610 14:20:38.852734  108966 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0610 14:20:38.852741  108966 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0610 14:20:38.852745  108966 command_runner.go:130] > # signature_policy = ""
	I0610 14:20:38.852758  108966 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0610 14:20:38.852766  108966 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0610 14:20:38.852772  108966 command_runner.go:130] > # changing them here.
	I0610 14:20:38.852776  108966 command_runner.go:130] > # insecure_registries = [
	I0610 14:20:38.852782  108966 command_runner.go:130] > # ]
	I0610 14:20:38.852788  108966 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0610 14:20:38.852795  108966 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0610 14:20:38.852803  108966 command_runner.go:130] > # image_volumes = "mkdir"
	I0610 14:20:38.852808  108966 command_runner.go:130] > # Temporary directory to use for storing big files
	I0610 14:20:38.852815  108966 command_runner.go:130] > # big_files_temporary_dir = ""
	I0610 14:20:38.852821  108966 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0610 14:20:38.852827  108966 command_runner.go:130] > # CNI plugins.
	I0610 14:20:38.852831  108966 command_runner.go:130] > [crio.network]
	I0610 14:20:38.852839  108966 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0610 14:20:38.852847  108966 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0610 14:20:38.852853  108966 command_runner.go:130] > # cni_default_network = ""
	I0610 14:20:38.852859  108966 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0610 14:20:38.852865  108966 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0610 14:20:38.852870  108966 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0610 14:20:38.852877  108966 command_runner.go:130] > # plugin_dirs = [
	I0610 14:20:38.852881  108966 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0610 14:20:38.852887  108966 command_runner.go:130] > # ]
	I0610 14:20:38.852892  108966 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0610 14:20:38.852896  108966 command_runner.go:130] > [crio.metrics]
	I0610 14:20:38.852903  108966 command_runner.go:130] > # Globally enable or disable metrics support.
	I0610 14:20:38.852907  108966 command_runner.go:130] > # enable_metrics = false
	I0610 14:20:38.852914  108966 command_runner.go:130] > # Specify enabled metrics collectors.
	I0610 14:20:38.852919  108966 command_runner.go:130] > # Per default all metrics are enabled.
	I0610 14:20:38.852927  108966 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0610 14:20:38.852935  108966 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0610 14:20:38.852943  108966 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0610 14:20:38.852949  108966 command_runner.go:130] > # metrics_collectors = [
	I0610 14:20:38.852953  108966 command_runner.go:130] > # 	"operations",
	I0610 14:20:38.852960  108966 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0610 14:20:38.852964  108966 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0610 14:20:38.852971  108966 command_runner.go:130] > # 	"operations_errors",
	I0610 14:20:38.852975  108966 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0610 14:20:38.852982  108966 command_runner.go:130] > # 	"image_pulls_by_name",
	I0610 14:20:38.852986  108966 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0610 14:20:38.852993  108966 command_runner.go:130] > # 	"image_pulls_failures",
	I0610 14:20:38.852997  108966 command_runner.go:130] > # 	"image_pulls_successes",
	I0610 14:20:38.853003  108966 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0610 14:20:38.853007  108966 command_runner.go:130] > # 	"image_layer_reuse",
	I0610 14:20:38.853013  108966 command_runner.go:130] > # 	"containers_oom_total",
	I0610 14:20:38.853017  108966 command_runner.go:130] > # 	"containers_oom",
	I0610 14:20:38.853023  108966 command_runner.go:130] > # 	"processes_defunct",
	I0610 14:20:38.853027  108966 command_runner.go:130] > # 	"operations_total",
	I0610 14:20:38.853034  108966 command_runner.go:130] > # 	"operations_latency_seconds",
	I0610 14:20:38.853038  108966 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0610 14:20:38.853044  108966 command_runner.go:130] > # 	"operations_errors_total",
	I0610 14:20:38.853049  108966 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0610 14:20:38.853055  108966 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0610 14:20:38.853059  108966 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0610 14:20:38.853066  108966 command_runner.go:130] > # 	"image_pulls_success_total",
	I0610 14:20:38.853070  108966 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0610 14:20:38.853076  108966 command_runner.go:130] > # 	"containers_oom_count_total",
	I0610 14:20:38.853080  108966 command_runner.go:130] > # ]
	I0610 14:20:38.853087  108966 command_runner.go:130] > # The port on which the metrics server will listen.
	I0610 14:20:38.853091  108966 command_runner.go:130] > # metrics_port = 9090
	I0610 14:20:38.853098  108966 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0610 14:20:38.853104  108966 command_runner.go:130] > # metrics_socket = ""
	I0610 14:20:38.853109  108966 command_runner.go:130] > # The certificate for the secure metrics server.
	I0610 14:20:38.853117  108966 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0610 14:20:38.853126  108966 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0610 14:20:38.853134  108966 command_runner.go:130] > # certificate on any modification event.
	I0610 14:20:38.853138  108966 command_runner.go:130] > # metrics_cert = ""
	I0610 14:20:38.853145  108966 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0610 14:20:38.853150  108966 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0610 14:20:38.853156  108966 command_runner.go:130] > # metrics_key = ""
	I0610 14:20:38.853161  108966 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0610 14:20:38.853167  108966 command_runner.go:130] > [crio.tracing]
	I0610 14:20:38.853173  108966 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0610 14:20:38.853177  108966 command_runner.go:130] > # enable_tracing = false
	I0610 14:20:38.853184  108966 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0610 14:20:38.853191  108966 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0610 14:20:38.853197  108966 command_runner.go:130] > # Number of samples to collect per million spans.
	I0610 14:20:38.853204  108966 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0610 14:20:38.853210  108966 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0610 14:20:38.853216  108966 command_runner.go:130] > [crio.stats]
	I0610 14:20:38.853221  108966 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0610 14:20:38.853229  108966 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0610 14:20:38.853235  108966 command_runner.go:130] > # stats_collection_period = 0
	I0610 14:20:38.853293  108966 cni.go:84] Creating CNI manager for ""
	I0610 14:20:38.853301  108966 cni.go:136] 2 nodes found, recommending kindnet
	I0610 14:20:38.853309  108966 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 14:20:38.853328  108966 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-007346 NodeName:multinode-007346-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 14:20:38.853439  108966 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-007346-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
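	The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---` markers. A minimal stdlib-only Python sketch (not minikube's actual code) that splits such a stream and reports each document's `kind`:

```python
# Split a multi-document kubeadm config stream and report each document's
# kind. Stdlib-only sketch; a real parser would use a YAML library.
CONFIG = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def document_kinds(stream: str) -> list:
    """Return the `kind:` value of each YAML document in the stream."""
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

print(document_kinds(CONFIG))
```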
	
	I0610 14:20:38.853486  108966 kubeadm.go:971] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-007346-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-007346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
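	The 10-kubeadm.conf drop-in shown above first clears `ExecStart=` and then re-declares it with node-specific kubelet flags, which is how systemd drop-ins override a unit's command line. A hedged sketch of assembling such a drop-in (the `kubelet_dropin` helper is hypothetical; flag values are illustrative, taken from the log):

```python
# Render a systemd drop-in that overrides kubelet's ExecStart.
# An empty "ExecStart=" line resets the unit's original command
# before the replacement ExecStart is declared.
def kubelet_dropin(binary: str, flags: dict) -> str:
    args = " ".join(f"--{k}={v}" for k, v in sorted(flags.items()))
    return (
        "[Service]\n"
        "ExecStart=\n"  # clear the unit's original ExecStart first
        f"ExecStart={binary} {args}\n"
    )

unit = kubelet_dropin(
    "/var/lib/minikube/binaries/v1.27.2/kubelet",
    {"node-ip": "192.168.58.3", "hostname-override": "multinode-007346-m02"},
)
print(unit)
```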
	I0610 14:20:38.853529  108966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 14:20:38.860900  108966 command_runner.go:130] > kubeadm
	I0610 14:20:38.860915  108966 command_runner.go:130] > kubectl
	I0610 14:20:38.860919  108966 command_runner.go:130] > kubelet
	I0610 14:20:38.861509  108966 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 14:20:38.861575  108966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0610 14:20:38.868959  108966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0610 14:20:38.883746  108966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 14:20:38.899415  108966 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0610 14:20:38.902365  108966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
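	The bash one-liner above makes the /etc/hosts update idempotent: it drops any existing line for `control-plane.minikube.internal` before appending the new mapping, so repeated runs never accumulate duplicates. A minimal Python sketch of the same upsert logic (operating on a string rather than the real file):

```python
# Idempotent /etc/hosts-style update, mirroring the grep -v / append
# one-liner in the log: remove any existing line for the host, then
# append the new IP-to-host mapping.
def upsert_host(hosts_text: str, ip: str, host: str) -> str:
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + host)]
    kept.append(f"{ip}\t{host}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.58.1\tcontrol-plane.minikube.internal\n"
after = upsert_host(before, "192.168.58.2", "control-plane.minikube.internal")
print(after)
```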
	I0610 14:20:38.911357  108966 host.go:66] Checking if "multinode-007346" exists ...
	I0610 14:20:38.911590  108966 config.go:182] Loaded profile config "multinode-007346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:20:38.911589  108966 start.go:301] JoinCluster: &{Name:multinode-007346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-007346 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:20:38.911689  108966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 14:20:38.911738  108966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:20:38.927783  108966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa Username:docker}
	I0610 14:20:39.063911  108966 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dqik7x.6hj0u6kndytn7la4 --discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 
	I0610 14:20:39.068435  108966 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0610 14:20:39.068470  108966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dqik7x.6hj0u6kndytn7la4 --discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-007346-m02"
	I0610 14:20:39.100347  108966 command_runner.go:130] ! W0610 14:20:39.099905    1102 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0610 14:20:39.126885  108966 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1035-gcp\n", err: exit status 1
	I0610 14:20:39.189565  108966 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 14:20:41.311314  108966 command_runner.go:130] > [preflight] Running pre-flight checks
	I0610 14:20:41.311341  108966 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0610 14:20:41.311350  108966 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1035-gcp
	I0610 14:20:41.311358  108966 command_runner.go:130] > OS: Linux
	I0610 14:20:41.311366  108966 command_runner.go:130] > CGROUPS_CPU: enabled
	I0610 14:20:41.311374  108966 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0610 14:20:41.311381  108966 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0610 14:20:41.311389  108966 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0610 14:20:41.311398  108966 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0610 14:20:41.311416  108966 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0610 14:20:41.311429  108966 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0610 14:20:41.311442  108966 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0610 14:20:41.311454  108966 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0610 14:20:41.311466  108966 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0610 14:20:41.311483  108966 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0610 14:20:41.311497  108966 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 14:20:41.311510  108966 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 14:20:41.311521  108966 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0610 14:20:41.311561  108966 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0610 14:20:41.311574  108966 command_runner.go:130] > This node has joined the cluster:
	I0610 14:20:41.311585  108966 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0610 14:20:41.311598  108966 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0610 14:20:41.311612  108966 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0610 14:20:41.311640  108966 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dqik7x.6hj0u6kndytn7la4 --discovery-token-ca-cert-hash sha256:f7c27fba2457aced24afc8e692292ec6bc66665a6c8292c6979f6ce9f519ecd4 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-007346-m02": (2.243155828s)
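	The join flow above runs `kubeadm token create --print-join-command --ttl=0` on the control plane, then replays the printed command on the new node. A hedged sketch of pulling the fields minikube needs out of that output (endpoint, bootstrap token, CA cert hash); the token and hash below are placeholders, not real credentials:

```python
import re

# Extract the endpoint, bootstrap token, and discovery CA-cert hash from
# `kubeadm token create --print-join-command` output. Values are dummies.
JOIN_CMD = ("kubeadm join control-plane.minikube.internal:8443 "
            "--token abcdef.0123456789abcdef "
            "--discovery-token-ca-cert-hash sha256:deadbeef")

def parse_join(cmd: str) -> dict:
    m = re.search(
        r"kubeadm join (\S+) --token (\S+) --discovery-token-ca-cert-hash (\S+)",
        cmd)
    if not m:
        raise ValueError("unrecognized join command")
    return {"endpoint": m.group(1), "token": m.group(2), "ca_hash": m.group(3)}

print(parse_join(JOIN_CMD))
```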
	I0610 14:20:41.311670  108966 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 14:20:41.464836  108966 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0610 14:20:41.464879  108966 start.go:303] JoinCluster complete in 2.553289703s
	I0610 14:20:41.464898  108966 cni.go:84] Creating CNI manager for ""
	I0610 14:20:41.464902  108966 cni.go:136] 2 nodes found, recommending kindnet
	I0610 14:20:41.464946  108966 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 14:20:41.468238  108966 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0610 14:20:41.468259  108966 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0610 14:20:41.468269  108966 command_runner.go:130] > Device: 37h/55d	Inode: 802287      Links: 1
	I0610 14:20:41.468277  108966 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 14:20:41.468285  108966 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0610 14:20:41.468292  108966 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0610 14:20:41.468300  108966 command_runner.go:130] > Change: 2023-06-10 14:01:36.496408099 +0000
	I0610 14:20:41.468313  108966 command_runner.go:130] >  Birth: 2023-06-10 14:01:36.472405538 +0000
	I0610 14:20:41.468392  108966 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0610 14:20:41.468406  108966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 14:20:41.483323  108966 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 14:20:41.708661  108966 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0610 14:20:41.712151  108966 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0610 14:20:41.715863  108966 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0610 14:20:41.727623  108966 command_runner.go:130] > daemonset.apps/kindnet configured
	I0610 14:20:41.732490  108966 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:20:41.732791  108966 kapi.go:59] client config for multinode-007346: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.crt", KeyFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.key", CAFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bb8e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 14:20:41.733058  108966 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0610 14:20:41.733071  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:41.733078  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:41.733087  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:41.734894  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:41.734914  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:41.734924  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:41 GMT
	I0610 14:20:41.734934  108966 round_trippers.go:580]     Audit-Id: 6efd8fea-11f2-4944-9a78-af978937c5b2
	I0610 14:20:41.734942  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:41.734953  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:41.734967  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:41.734977  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:41.734990  108966 round_trippers.go:580]     Content-Length: 291
	I0610 14:20:41.735014  108966 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8a850b2a-5b13-4da4-8ed3-89b9bb9201e5","resourceVersion":"448","creationTimestamp":"2023-06-10T14:19:39Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0610 14:20:41.735107  108966 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-007346" context rescaled to 1 replicas
	I0610 14:20:41.735139  108966 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0610 14:20:41.738025  108966 out.go:177] * Verifying Kubernetes components...
	I0610 14:20:41.739544  108966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:20:41.750317  108966 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:20:41.750584  108966 kapi.go:59] client config for multinode-007346: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.crt", KeyFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/profiles/multinode-007346/client.key", CAFile:"/home/jenkins/minikube-integration/15074-18675/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bb8e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 14:20:41.750866  108966 node_ready.go:35] waiting up to 6m0s for node "multinode-007346-m02" to be "Ready" ...
	I0610 14:20:41.750932  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:41.750943  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:41.750954  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:41.750967  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:41.753088  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:41.753108  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:41.753118  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:41.753127  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:41.753135  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:41.753144  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:41.753156  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:41 GMT
	I0610 14:20:41.753168  108966 round_trippers.go:580]     Audit-Id: 9cf131a6-2e8f-45b9-84d8-d884319cccf5
	I0610 14:20:41.753287  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"483","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0610 14:20:42.254351  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:42.254370  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:42.254378  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:42.254384  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:42.256654  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:42.256678  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:42.256684  108966 round_trippers.go:580]     Audit-Id: 739da854-eefc-4582-b261-efdf274e1058
	I0610 14:20:42.256690  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:42.256698  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:42.256706  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:42.256716  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:42.256729  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:42 GMT
	I0610 14:20:42.256848  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"483","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0610 14:20:42.754466  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:42.754485  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:42.754492  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:42.754499  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:42.756458  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:42.756485  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:42.756496  108966 round_trippers.go:580]     Audit-Id: 25e3a166-acec-47ac-b84b-3d8deb1be9e8
	I0610 14:20:42.756505  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:42.756516  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:42.756528  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:42.756545  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:42.756560  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:42 GMT
	I0610 14:20:42.756683  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"483","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0610 14:20:43.253880  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:43.253907  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:43.253915  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:43.253921  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:43.256237  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:43.256262  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:43.256273  108966 round_trippers.go:580]     Audit-Id: bb35a030-71ca-474e-b0a7-fa7286f6a100
	I0610 14:20:43.256281  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:43.256291  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:43.256300  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:43.256314  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:43.256323  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:43 GMT
	I0610 14:20:43.256465  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:43.754065  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:43.754084  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:43.754091  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:43.754098  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:43.756549  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:43.756575  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:43.756584  108966 round_trippers.go:580]     Audit-Id: 921d5124-18bd-4d56-86bd-372f28e2d3f3
	I0610 14:20:43.756592  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:43.756598  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:43.756607  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:43.756616  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:43.756625  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:43 GMT
	I0610 14:20:43.756846  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:43.757125  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:20:44.254745  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:44.254778  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:44.254786  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:44.254792  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:44.257221  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:44.257245  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:44.257254  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:44.257262  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:44 GMT
	I0610 14:20:44.257269  108966 round_trippers.go:580]     Audit-Id: ae717556-40a4-4a03-b6e8-5e6376691f14
	I0610 14:20:44.257277  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:44.257347  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:44.257364  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:44.257463  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:44.753976  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:44.753999  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:44.754007  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:44.754013  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:44.756304  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:44.756326  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:44.756334  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:44 GMT
	I0610 14:20:44.756340  108966 round_trippers.go:580]     Audit-Id: 87387d02-46c5-4d80-9469-ba65d9bf0c63
	I0610 14:20:44.756349  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:44.756358  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:44.756367  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:44.756376  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:44.756487  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:45.254043  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:45.254066  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:45.254079  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:45.254088  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:45.256221  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:45.256240  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:45.256247  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:45.256253  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:45.256258  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:45.256265  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:45.256271  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:45 GMT
	I0610 14:20:45.256277  108966 round_trippers.go:580]     Audit-Id: b770cce7-840a-456b-92f7-6913aa441cf6
	I0610 14:20:45.256388  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:45.753857  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:45.753878  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:45.753886  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:45.753892  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:45.756082  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:45.756098  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:45.756104  108966 round_trippers.go:580]     Audit-Id: 15690a67-abcf-422a-a0ba-3736b910fc8b
	I0610 14:20:45.756110  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:45.756115  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:45.756120  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:45.756126  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:45.756132  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:45 GMT
	I0610 14:20:45.756272  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:46.254701  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:46.254719  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:46.254727  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:46.254733  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:46.256960  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:46.256982  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:46.256989  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:46.256994  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:46.257000  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:46.257005  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:46.257011  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:46 GMT
	I0610 14:20:46.257016  108966 round_trippers.go:580]     Audit-Id: 1907594b-d433-468b-8c16-2a6178ef1eeb
	I0610 14:20:46.257182  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:46.257479  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:20:46.753766  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:46.753788  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:46.753797  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:46.753804  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:46.756052  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:46.756077  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:46.756088  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:46.756097  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:46.756106  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:46.756115  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:46.756124  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:46 GMT
	I0610 14:20:46.756132  108966 round_trippers.go:580]     Audit-Id: 5723c6ae-fec7-4664-afd3-cbf3ee600dbf
	I0610 14:20:46.756273  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:47.253848  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:47.253872  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:47.253880  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:47.253886  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:47.256095  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:47.256112  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:47.256118  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:47 GMT
	I0610 14:20:47.256124  108966 round_trippers.go:580]     Audit-Id: 1fa10a12-4c6c-47e1-b68f-2573bef23b38
	I0610 14:20:47.256129  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:47.256134  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:47.256139  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:47.256145  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:47.256269  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:47.753800  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:47.753820  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:47.753832  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:47.753838  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:47.756087  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:47.756111  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:47.756120  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:47.756129  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:47 GMT
	I0610 14:20:47.756138  108966 round_trippers.go:580]     Audit-Id: 2aabea16-7c87-43ee-8de0-9c492aa822ed
	I0610 14:20:47.756147  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:47.756155  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:47.756168  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:47.756340  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:48.253886  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:48.253905  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:48.253912  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:48.253919  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:48.256496  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:48.256513  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:48.256520  108966 round_trippers.go:580]     Audit-Id: 78148c21-7e90-4a0c-95fe-fd8dce3117fa
	I0610 14:20:48.256525  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:48.256531  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:48.256539  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:48.256547  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:48.256558  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:48 GMT
	I0610 14:20:48.256652  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:48.754317  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:48.754337  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:48.754346  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:48.754352  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:48.756522  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:48.756543  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:48.756551  108966 round_trippers.go:580]     Audit-Id: 82920d96-35af-45ad-bc03-723cdaf290d7
	I0610 14:20:48.756557  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:48.756562  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:48.756567  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:48.756572  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:48.756578  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:48 GMT
	I0610 14:20:48.756697  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:48.757025  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:20:49.254334  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:49.254355  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:49.254364  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:49.254370  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:49.256633  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:49.256654  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:49.256665  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:49.256675  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:49.256683  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:49 GMT
	I0610 14:20:49.256696  108966 round_trippers.go:580]     Audit-Id: 3717ce40-d5b2-489c-b14a-67c54279839c
	I0610 14:20:49.256708  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:49.256720  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:49.256820  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:49.754452  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:49.754474  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:49.754482  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:49.754488  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:49.756753  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:49.756774  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:49.756781  108966 round_trippers.go:580]     Audit-Id: 001d4e82-8c8c-42b8-9ade-95a9c72f60a7
	I0610 14:20:49.756789  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:49.756797  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:49.756806  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:49.756818  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:49.756828  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:49 GMT
	I0610 14:20:49.756950  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:50.254543  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:50.254563  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:50.254571  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:50.254581  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:50.256663  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:50.256681  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:50.256687  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:50 GMT
	I0610 14:20:50.256693  108966 round_trippers.go:580]     Audit-Id: e6ea1cb5-bec0-4af5-a412-17cddb7a258e
	I0610 14:20:50.256698  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:50.256703  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:50.256708  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:50.256714  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:50.256854  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:50.754491  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:50.754513  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:50.754524  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:50.754532  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:50.757172  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:50.757196  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:50.757212  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:50.757221  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:50.757229  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:50 GMT
	I0610 14:20:50.757241  108966 round_trippers.go:580]     Audit-Id: 96c98528-94b3-4689-8953-0f6e55d76ab2
	I0610 14:20:50.757250  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:50.757263  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:50.757386  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"499","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0610 14:20:50.757686  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:20:51.254494  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:51.254512  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:51.254520  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:51.254527  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:51.256681  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:51.256701  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:51.256712  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:51.256721  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:51.256731  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:51.256744  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:51.256756  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:51 GMT
	I0610 14:20:51.256767  108966 round_trippers.go:580]     Audit-Id: 43dd1cd0-9cbe-4429-8ac0-959c35e0bb13
	I0610 14:20:51.256888  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:51.754429  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:51.754449  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:51.754461  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:51.754467  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:51.756965  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:51.756985  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:51.756994  108966 round_trippers.go:580]     Audit-Id: 40a392f9-f380-4e7c-96a9-a5076c183808
	I0610 14:20:51.757002  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:51.757010  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:51.757018  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:51.757030  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:51.757043  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:51 GMT
	I0610 14:20:51.757135  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:52.254734  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:52.254756  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:52.254764  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:52.254770  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:52.256931  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:52.256951  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:52.256958  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:52.256969  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:52 GMT
	I0610 14:20:52.256975  108966 round_trippers.go:580]     Audit-Id: 399bc72c-7a3a-4477-8f30-54302101dd83
	I0610 14:20:52.256980  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:52.256986  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:52.256991  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:52.257142  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:52.754824  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:52.754848  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:52.754859  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:52.754923  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:52.757165  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:52.757189  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:52.757199  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:52.757208  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:52 GMT
	I0610 14:20:52.757217  108966 round_trippers.go:580]     Audit-Id: 760dcd2d-955a-4753-a4e0-b8a2f50da616
	I0610 14:20:52.757225  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:52.757233  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:52.757246  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:52.757357  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:53.253896  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:53.253916  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:53.253924  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:53.253931  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:53.256169  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:53.256191  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:53.256201  108966 round_trippers.go:580]     Audit-Id: 20bc56b9-4875-4832-80fa-900747b5f2d3
	I0610 14:20:53.256209  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:53.256218  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:53.256225  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:53.256233  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:53.256242  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:53 GMT
	I0610 14:20:53.256346  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:53.256614  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:20:53.753853  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:53.753871  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:53.753879  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:53.753885  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:53.756039  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:53.756059  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:53.756065  108966 round_trippers.go:580]     Audit-Id: 005b1c75-6f9d-4c16-bc79-d416871e22bd
	I0610 14:20:53.756071  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:53.756077  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:53.756082  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:53.756087  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:53.756092  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:53 GMT
	I0610 14:20:53.756238  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:54.253921  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:54.253941  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:54.253949  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:54.253956  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:54.256242  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:54.256264  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:54.256276  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:54.256285  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:54.256293  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:54.256300  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:54.256308  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:54 GMT
	I0610 14:20:54.256317  108966 round_trippers.go:580]     Audit-Id: a02535cc-9a82-445a-89ef-149080f67998
	I0610 14:20:54.256458  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:54.753982  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:54.754002  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:54.754009  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:54.754016  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:54.756270  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:54.756288  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:54.756295  108966 round_trippers.go:580]     Audit-Id: c6df8c2b-a235-479b-ace2-931bd0f71244
	I0610 14:20:54.756302  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:54.756309  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:54.756317  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:54.756327  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:54.756339  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:54 GMT
	I0610 14:20:54.756497  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:55.254074  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:55.254095  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:55.254104  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:55.254110  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:55.256348  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:55.256371  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:55.256380  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:55.256389  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:55 GMT
	I0610 14:20:55.256397  108966 round_trippers.go:580]     Audit-Id: d372597a-2982-402e-88f0-51e6b6c65159
	I0610 14:20:55.256405  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:55.256413  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:55.256420  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:55.256600  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:55.256879  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:20:55.754161  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:55.754181  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:55.754188  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:55.754194  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:55.756437  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:55.756457  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:55.756463  108966 round_trippers.go:580]     Audit-Id: 9e788174-ac7a-4488-a09a-3badaeed4ad4
	I0610 14:20:55.756470  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:55.756479  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:55.756487  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:55.756495  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:55.756503  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:55 GMT
	I0610 14:20:55.756597  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:56.254335  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:56.254354  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:56.254362  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:56.254369  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:56.256461  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:56.256486  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:56.256494  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:56 GMT
	I0610 14:20:56.256503  108966 round_trippers.go:580]     Audit-Id: b508eecf-ef6a-4617-9b34-b17605006316
	I0610 14:20:56.256511  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:56.256519  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:56.256527  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:56.256535  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:56.256640  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:56.754372  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:56.754391  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:56.754399  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:56.754406  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:56.756355  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:20:56.756377  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:56.756386  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:56.756395  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:56.756403  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:56.756411  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:56.756420  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:56 GMT
	I0610 14:20:56.756432  108966 round_trippers.go:580]     Audit-Id: c54f8613-ab28-4107-a1c8-a1d8265b62e0
	I0610 14:20:56.756589  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:57.253998  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:57.254019  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:57.254027  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:57.254033  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:57.256299  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:57.256321  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:57.256331  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:57.256340  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:57.256350  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:57.256359  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:57 GMT
	I0610 14:20:57.256372  108966 round_trippers.go:580]     Audit-Id: 81f822eb-16a3-428f-bdbe-20ba7c9e3fad
	I0610 14:20:57.256385  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:57.256497  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:57.753879  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:57.753899  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:57.753910  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:57.753918  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:57.756069  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:57.756089  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:57.756096  108966 round_trippers.go:580]     Audit-Id: bd53b8d4-1966-41f4-a5d5-83f94f11bf4e
	I0610 14:20:57.756102  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:57.756107  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:57.756113  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:57.756118  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:57.756123  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:57 GMT
	I0610 14:20:57.756296  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:57.756698  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:20:58.253811  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:58.253844  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:58.253853  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:58.253859  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:58.256224  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:58.256244  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:58.256253  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:58.256261  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:58 GMT
	I0610 14:20:58.256268  108966 round_trippers.go:580]     Audit-Id: 8d42198a-f07d-4b4f-8e6e-ca0304751965
	I0610 14:20:58.256276  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:58.256284  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:58.256294  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:58.256436  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:58.754123  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:58.754142  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:58.754153  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:58.754161  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:58.756656  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:58.756677  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:58.756684  108966 round_trippers.go:580]     Audit-Id: 737dae49-2a70-467c-8bcc-b2f03fb00372
	I0610 14:20:58.756691  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:58.756700  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:58.756707  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:58.756723  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:58.756732  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:58 GMT
	I0610 14:20:58.756832  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:59.254504  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:59.254532  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:59.254540  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:59.254547  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:59.256697  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:59.256721  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:59.256730  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:59.256738  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:59 GMT
	I0610 14:20:59.256747  108966 round_trippers.go:580]     Audit-Id: 48ebf147-dc0e-4e9a-b7c6-180a950e7183
	I0610 14:20:59.256755  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:59.256770  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:59.256779  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:59.256877  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:59.754491  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:20:59.754520  108966 round_trippers.go:469] Request Headers:
	I0610 14:20:59.754538  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:20:59.754547  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:20:59.756975  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:20:59.756995  108966 round_trippers.go:577] Response Headers:
	I0610 14:20:59.757001  108966 round_trippers.go:580]     Audit-Id: c3d90d2d-33e3-4986-8347-c985b2a52937
	I0610 14:20:59.757007  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:20:59.757015  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:20:59.757024  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:20:59.757032  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:20:59.757040  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:20:59 GMT
	I0610 14:20:59.757139  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:20:59.757426  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:00.254784  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:00.254811  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:00.254819  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:00.254825  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:00.257366  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:00.257387  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:00.257400  108966 round_trippers.go:580]     Audit-Id: a29d311a-6e37-437f-bcc3-f74112a8d163
	I0610 14:21:00.257408  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:00.257416  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:00.257425  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:00.257437  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:00.257445  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:00 GMT
	I0610 14:21:00.257540  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:00.754112  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:00.754131  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:00.754139  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:00.754145  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:00.756272  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:00.756294  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:00.756304  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:00.756313  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:00 GMT
	I0610 14:21:00.756322  108966 round_trippers.go:580]     Audit-Id: 8a4032c2-3ab3-4558-888c-8b58d10b7238
	I0610 14:21:00.756331  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:00.756339  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:00.756348  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:00.756477  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:01.254415  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:01.254435  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:01.254443  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:01.254451  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:01.256623  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:01.256640  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:01.256649  108966 round_trippers.go:580]     Audit-Id: 8429688b-437a-40ac-9160-ded8aa647e3a
	I0610 14:21:01.256655  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:01.256660  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:01.256665  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:01.256671  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:01.256677  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:01 GMT
	I0610 14:21:01.256864  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:01.754551  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:01.754574  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:01.754586  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:01.754595  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:01.756694  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:01.756713  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:01.756720  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:01.756726  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:01 GMT
	I0610 14:21:01.756732  108966 round_trippers.go:580]     Audit-Id: e09dd6ce-2fee-443a-91e2-067de4eab945
	I0610 14:21:01.756737  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:01.756742  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:01.756748  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:01.756842  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:02.254181  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:02.254216  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:02.254224  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:02.254230  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:02.256412  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:02.256432  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:02.256439  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:02.256445  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:02.256450  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:02.256456  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:02.256461  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:02 GMT
	I0610 14:21:02.256466  108966 round_trippers.go:580]     Audit-Id: 551ff20d-f052-476c-bc61-0ce4dac3a2c5
	I0610 14:21:02.256580  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:02.256942  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:02.754108  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:02.754127  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:02.754135  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:02.754148  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:02.756338  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:02.756360  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:02.756371  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:02.756380  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:02.756390  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:02.756399  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:02 GMT
	I0610 14:21:02.756411  108966 round_trippers.go:580]     Audit-Id: 6841f392-7b52-4826-9fc5-c151d8fee646
	I0610 14:21:02.756423  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:02.756526  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:03.254087  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:03.254107  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:03.254116  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:03.254122  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:03.256522  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:03.256545  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:03.256554  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:03.256560  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:03.256567  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:03.256576  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:03.256586  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:03 GMT
	I0610 14:21:03.256595  108966 round_trippers.go:580]     Audit-Id: b90e0212-7c3f-4540-8370-6b1644f5b8f9
	I0610 14:21:03.256728  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:03.754277  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:03.754298  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:03.754306  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:03.754312  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:03.756502  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:03.756525  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:03.756535  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:03.756543  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:03.756551  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:03.756559  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:03.756567  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:03 GMT
	I0610 14:21:03.756583  108966 round_trippers.go:580]     Audit-Id: bfc978c1-df4b-4a47-a2fc-01c3afecc4d7
	I0610 14:21:03.756685  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:04.254524  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:04.254544  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:04.254555  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:04.254564  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:04.256862  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:04.256884  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:04.256891  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:04.256896  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:04 GMT
	I0610 14:21:04.256901  108966 round_trippers.go:580]     Audit-Id: 61b714cf-a164-452b-adc2-0b9ab7f16d20
	I0610 14:21:04.256906  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:04.256911  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:04.256917  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:04.257074  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:04.257377  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:04.754740  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:04.754763  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:04.754774  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:04.754781  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:04.756960  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:04.756978  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:04.756985  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:04.756990  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:04.756996  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:04 GMT
	I0610 14:21:04.757001  108966 round_trippers.go:580]     Audit-Id: 4a953e07-94f9-4f04-bde2-248ab9e0541d
	I0610 14:21:04.757007  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:04.757012  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:04.757109  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:05.254343  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:05.254369  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:05.254380  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:05.254391  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:05.256634  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:05.256653  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:05.256663  108966 round_trippers.go:580]     Audit-Id: 71df9f15-c36d-494b-8142-9ba5cb649cb8
	I0610 14:21:05.256672  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:05.256680  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:05.256687  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:05.256695  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:05.256707  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:05 GMT
	I0610 14:21:05.256815  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:05.754492  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:05.754512  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:05.754520  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:05.754526  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:05.756996  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:05.757014  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:05.757020  108966 round_trippers.go:580]     Audit-Id: ed679c73-d015-4e40-9a83-c070c3613a53
	I0610 14:21:05.757026  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:05.757031  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:05.757036  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:05.757041  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:05.757049  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:05 GMT
	I0610 14:21:05.757149  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:06.254655  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:06.254680  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:06.254692  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:06.254700  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:06.256913  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:06.256931  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:06.256937  108966 round_trippers.go:580]     Audit-Id: bbfdb164-f5a5-4cbd-a752-23f94a540acf
	I0610 14:21:06.256943  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:06.256948  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:06.256953  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:06.256958  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:06.256966  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:06 GMT
	I0610 14:21:06.257121  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:06.257464  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:06.754808  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:06.754828  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:06.754836  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:06.754842  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:06.757095  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:06.757113  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:06.757130  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:06.757138  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:06.757146  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:06 GMT
	I0610 14:21:06.757154  108966 round_trippers.go:580]     Audit-Id: eff576e0-9a32-41aa-ae60-5d9988078f92
	I0610 14:21:06.757166  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:06.757176  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:06.757269  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:07.253847  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:07.253869  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:07.253877  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:07.253883  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:07.256188  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:07.256209  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:07.256219  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:07.256231  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:07 GMT
	I0610 14:21:07.256240  108966 round_trippers.go:580]     Audit-Id: 8664488a-cd02-4c89-a5a2-fd633642a1c3
	I0610 14:21:07.256249  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:07.256259  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:07.256264  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:07.256383  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:07.753850  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:07.753875  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:07.753884  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:07.753890  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:07.756252  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:07.756272  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:07.756279  108966 round_trippers.go:580]     Audit-Id: a714cfea-4118-40e1-b7d7-191b365884bc
	I0610 14:21:07.756285  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:07.756291  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:07.756296  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:07.756310  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:07.756315  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:07 GMT
	I0610 14:21:07.756439  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:08.254028  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:08.254046  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:08.254054  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:08.254060  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:08.256245  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:08.256265  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:08.256275  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:08.256283  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:08.256290  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:08.256298  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:08.256307  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:08 GMT
	I0610 14:21:08.256319  108966 round_trippers.go:580]     Audit-Id: a9701fb8-df1b-43c9-b81e-c93be2790248
	I0610 14:21:08.256438  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:08.753857  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:08.753876  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:08.753886  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:08.753894  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:08.755927  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:08.755948  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:08.755958  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:08.755966  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:08.755977  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:08.755988  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:08 GMT
	I0610 14:21:08.756000  108966 round_trippers.go:580]     Audit-Id: bc4a9024-1346-4696-990a-32beda1a0eac
	I0610 14:21:08.756009  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:08.756127  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:08.756424  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:09.254748  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:09.254767  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:09.254775  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:09.254781  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:09.257026  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:09.257050  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:09.257060  108966 round_trippers.go:580]     Audit-Id: 58b6ae8e-763d-4dd4-966f-1890570e5e27
	I0610 14:21:09.257070  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:09.257079  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:09.257091  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:09.257099  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:09.257111  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:09 GMT
	I0610 14:21:09.257242  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:09.753779  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:09.753799  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:09.753807  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:09.753813  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:09.756059  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:09.756087  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:09.756099  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:09.756108  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:09.756117  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:09 GMT
	I0610 14:21:09.756129  108966 round_trippers.go:580]     Audit-Id: d4f38ec6-afd3-40e1-906a-70afae806711
	I0610 14:21:09.756142  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:09.756155  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:09.756273  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:10.253768  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:10.253786  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:10.253795  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:10.253805  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:10.256150  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:10.256171  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:10.256181  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:10 GMT
	I0610 14:21:10.256190  108966 round_trippers.go:580]     Audit-Id: f5a6d862-bb48-456e-9b88-04e153b19638
	I0610 14:21:10.256197  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:10.256209  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:10.256221  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:10.256229  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:10.256326  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:10.753887  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:10.753931  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:10.753939  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:10.753946  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:10.756219  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:10.756239  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:10.756246  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:10.756252  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:10.756261  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:10.756274  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:10.756286  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:10 GMT
	I0610 14:21:10.756295  108966 round_trippers.go:580]     Audit-Id: 98227a86-9d73-4b18-9ab5-de3cec053e94
	I0610 14:21:10.756416  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:10.756691  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:11.254340  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:11.254358  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:11.254368  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:11.254374  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:11.256443  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:11.256457  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:11.256463  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:11.256468  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:11.256474  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:11 GMT
	I0610 14:21:11.256479  108966 round_trippers.go:580]     Audit-Id: 62a1e3bb-043d-453e-bfb9-a609f169b2af
	I0610 14:21:11.256488  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:11.256497  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:11.256612  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:11.754170  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:11.754190  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:11.754213  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:11.754225  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:11.756433  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:11.756450  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:11.756457  108966 round_trippers.go:580]     Audit-Id: 28d006a3-84c4-4d28-a83f-b67fb8165cbd
	I0610 14:21:11.756463  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:11.756468  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:11.756476  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:11.756485  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:11.756496  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:11 GMT
	I0610 14:21:11.756610  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:12.254240  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:12.254260  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:12.254268  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:12.254274  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:12.256557  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:12.256575  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:12.256581  108966 round_trippers.go:580]     Audit-Id: 01ca12c7-d261-4e52-9dad-8eaa2340d654
	I0610 14:21:12.256587  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:12.256592  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:12.256597  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:12.256603  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:12.256608  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:12 GMT
	I0610 14:21:12.256699  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:12.754411  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:12.754432  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:12.754440  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:12.754451  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:12.756920  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:12.756936  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:12.756942  108966 round_trippers.go:580]     Audit-Id: f690a71b-bf43-4e90-ba0e-1f6282a5827f
	I0610 14:21:12.756948  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:12.756953  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:12.756958  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:12.756965  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:12.756973  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:12 GMT
	I0610 14:21:12.757117  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:12.757436  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:13.254707  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:13.254727  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:13.254734  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:13.254740  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:13.256943  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:13.256968  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:13.256977  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:13.256986  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:13.256995  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:13 GMT
	I0610 14:21:13.257005  108966 round_trippers.go:580]     Audit-Id: 1304712a-6c4e-4d31-a40c-02fc6b310184
	I0610 14:21:13.257012  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:13.257022  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:13.257135  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:13.754770  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:13.754795  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:13.754803  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:13.754809  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:13.757032  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:13.757053  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:13.757060  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:13.757066  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:13.757071  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:13.757077  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:13.757082  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:13 GMT
	I0610 14:21:13.757087  108966 round_trippers.go:580]     Audit-Id: a525d289-c9d8-439a-96b3-71d4e054d0d2
	I0610 14:21:13.757240  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:14.253950  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:14.253969  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:14.253976  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:14.253983  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:14.256186  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:14.256205  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:14.256214  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:14 GMT
	I0610 14:21:14.256223  108966 round_trippers.go:580]     Audit-Id: 5a11e3ff-b517-4162-b04e-33235993abab
	I0610 14:21:14.256230  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:14.256238  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:14.256250  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:14.256262  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:14.256383  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:14.753911  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:14.753929  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:14.753937  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:14.753943  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:14.756211  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:14.756230  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:14.756238  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:14.756243  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:14.756249  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:14.756256  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:14.756265  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:14 GMT
	I0610 14:21:14.756274  108966 round_trippers.go:580]     Audit-Id: 5a7d2b77-c5cc-49d9-8935-98d668de52e3
	I0610 14:21:14.756403  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:15.253983  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:15.254003  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:15.254011  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:15.254017  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:15.256380  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:15.256403  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:15.256413  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:15.256422  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:15 GMT
	I0610 14:21:15.256430  108966 round_trippers.go:580]     Audit-Id: 3c1030b6-7041-4b8c-b705-c12f31704ca5
	I0610 14:21:15.256441  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:15.256457  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:15.256466  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:15.256586  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:15.256965  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:15.754139  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:15.754158  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:15.754166  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:15.754174  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:15.756281  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:15.756302  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:15.756309  108966 round_trippers.go:580]     Audit-Id: 3859302d-5715-4d9c-a77d-76991f27afb0
	I0610 14:21:15.756315  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:15.756320  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:15.756325  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:15.756331  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:15.756336  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:15 GMT
	I0610 14:21:15.756442  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:16.254292  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:16.254312  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:16.254323  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:16.254334  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:16.256638  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:16.256660  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:16.256666  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:16.256672  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:16.256677  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:16.256684  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:16 GMT
	I0610 14:21:16.256692  108966 round_trippers.go:580]     Audit-Id: 36cfd668-f03d-4d57-aa68-aaf12ae266ea
	I0610 14:21:16.256703  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:16.256847  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:16.754456  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:16.754474  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:16.754482  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:16.754489  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:16.756576  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:16.756599  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:16.756610  108966 round_trippers.go:580]     Audit-Id: f72c9d49-0306-423e-a4a1-a033a06d366f
	I0610 14:21:16.756618  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:16.756629  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:16.756638  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:16.756648  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:16.756665  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:16 GMT
	I0610 14:21:16.756771  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:17.254434  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:17.254453  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:17.254462  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:17.254469  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:17.256877  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:17.256893  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:17.256901  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:17.256907  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:17 GMT
	I0610 14:21:17.256914  108966 round_trippers.go:580]     Audit-Id: 000513e3-99c8-4ef5-b184-e199f8b567d7
	I0610 14:21:17.256923  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:17.256936  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:17.256952  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:17.257093  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:17.257382  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:17.754794  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:17.754819  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:17.754831  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:17.754840  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:17.757329  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:17.757345  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:17.757354  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:17.757359  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:17.757365  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:17.757370  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:17 GMT
	I0610 14:21:17.757376  108966 round_trippers.go:580]     Audit-Id: 6b2a7ff6-e6ba-4a1b-ad61-cf03bd3dc750
	I0610 14:21:17.757381  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:17.757546  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:18.254303  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:18.254331  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:18.254344  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:18.254354  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:18.256897  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:18.256913  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:18.256920  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:18.256925  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:18.256931  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:18.256937  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:18.256942  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:18 GMT
	I0610 14:21:18.256948  108966 round_trippers.go:580]     Audit-Id: 5a8baba6-7536-4eda-88aa-fd82d2707306
	I0610 14:21:18.257102  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:18.754743  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:18.754764  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:18.754772  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:18.754778  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:18.757017  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:18.757041  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:18.757051  108966 round_trippers.go:580]     Audit-Id: 73ad4271-d032-4b1e-93f2-2174cbd4c5b2
	I0610 14:21:18.757057  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:18.757065  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:18.757076  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:18.757085  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:18.757095  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:18 GMT
	I0610 14:21:18.757190  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:19.253777  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:19.253806  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:19.253814  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:19.253821  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:19.255997  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:19.256018  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:19.256027  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:19 GMT
	I0610 14:21:19.256036  108966 round_trippers.go:580]     Audit-Id: 53835654-934f-4fcf-8581-87396907c17b
	I0610 14:21:19.256043  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:19.256051  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:19.256059  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:19.256067  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:19.256205  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:19.753836  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:19.753869  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:19.753878  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:19.753884  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:19.756068  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:19.756090  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:19.756100  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:19.756109  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:19.756118  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:19 GMT
	I0610 14:21:19.756128  108966 round_trippers.go:580]     Audit-Id: fc0b9b12-7ced-4afb-be6e-a642165c9125
	I0610 14:21:19.756137  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:19.756145  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:19.756243  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:19.756537  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:20.253832  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:20.253856  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:20.253864  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:20.253869  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:20.256200  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:20.256223  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:20.256232  108966 round_trippers.go:580]     Audit-Id: d8e8e252-aa80-4ed9-be45-21fff317a132
	I0610 14:21:20.256240  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:20.256248  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:20.256256  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:20.256274  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:20.256286  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:20 GMT
	I0610 14:21:20.256432  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:20.753953  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:20.753973  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:20.753981  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:20.753987  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:20.756221  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:20.756244  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:20.756254  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:20.756263  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:20 GMT
	I0610 14:21:20.756270  108966 round_trippers.go:580]     Audit-Id: 7332cb40-6530-4c34-b9c3-3bfbf40b7948
	I0610 14:21:20.756279  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:20.756296  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:20.756305  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:20.756418  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:21.254453  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:21.254473  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:21.254481  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:21.254491  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:21.256784  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:21.256802  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:21.256808  108966 round_trippers.go:580]     Audit-Id: 1a817d3d-c974-469c-9a8c-840a28d96383
	I0610 14:21:21.256814  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:21.256819  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:21.256824  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:21.256829  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:21.256835  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:21 GMT
	I0610 14:21:21.256967  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:21.754660  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:21.754680  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:21.754689  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:21.754695  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:21.756959  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:21.756983  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:21.756993  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:21.757002  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:21.757011  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:21 GMT
	I0610 14:21:21.757019  108966 round_trippers.go:580]     Audit-Id: 5b0130a3-9689-436c-96e3-67f553ec15ec
	I0610 14:21:21.757026  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:21.757046  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:21.757117  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:21.757414  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:22.254746  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:22.254767  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:22.254775  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:22.254781  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:22.257001  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:22.257023  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:22.257032  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:22.257041  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:22 GMT
	I0610 14:21:22.257048  108966 round_trippers.go:580]     Audit-Id: 0e807bbf-4a60-4c62-bbdb-56c8c4d91784
	I0610 14:21:22.257055  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:22.257065  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:22.257074  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:22.257179  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:22.754838  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:22.754862  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:22.754874  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:22.754884  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:22.757120  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:22.757141  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:22.757152  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:22.757160  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:22 GMT
	I0610 14:21:22.757167  108966 round_trippers.go:580]     Audit-Id: 7d4966e8-071c-41ad-b714-83154c163917
	I0610 14:21:22.757175  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:22.757183  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:22.757192  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:22.757323  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:23.254684  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:23.254709  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:23.254722  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:23.254732  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:23.258642  108966 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 14:21:23.258666  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:23.258675  108966 round_trippers.go:580]     Audit-Id: 37812e70-3b28-4085-9fd7-2efb0ef5546b
	I0610 14:21:23.258685  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:23.258693  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:23.258698  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:23.258704  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:23.258709  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:23 GMT
	I0610 14:21:23.258848  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:23.754441  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:23.754462  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:23.754469  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:23.754475  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:23.758218  108966 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 14:21:23.758242  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:23.758252  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:23.758262  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:23.758271  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:23.758280  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:23.758293  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:23 GMT
	I0610 14:21:23.758305  108966 round_trippers.go:580]     Audit-Id: 3aee86e8-7408-46a1-aa68-dcb7f666a8d6
	I0610 14:21:23.758414  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:23.758792  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:24.254067  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:24.254091  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:24.254099  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:24.254105  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:24.256379  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:24.256400  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:24.256410  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:24 GMT
	I0610 14:21:24.256418  108966 round_trippers.go:580]     Audit-Id: 0ca81dc3-ba90-4391-8404-b3c3df460f67
	I0610 14:21:24.256426  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:24.256433  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:24.256440  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:24.256447  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:24.256558  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:24.753847  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:24.753868  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:24.753876  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:24.753882  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:24.756146  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:24.756169  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:24.756186  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:24.756194  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:24.756202  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:24.756210  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:24 GMT
	I0610 14:21:24.756218  108966 round_trippers.go:580]     Audit-Id: 95eab851-eb5e-40e9-832d-77416e010006
	I0610 14:21:24.756226  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:24.756337  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:25.254612  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:25.254633  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:25.254641  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:25.254648  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:25.257082  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:25.257103  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:25.257110  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:25 GMT
	I0610 14:21:25.257116  108966 round_trippers.go:580]     Audit-Id: 4d045a14-2ded-4485-9338-b903513d65b0
	I0610 14:21:25.257121  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:25.257126  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:25.257131  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:25.257137  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:25.257265  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:25.753914  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:25.753936  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:25.753947  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:25.753955  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:25.756235  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:25.756257  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:25.756265  108966 round_trippers.go:580]     Audit-Id: bf645ac5-50c4-4a52-a569-46afe93c32f3
	I0610 14:21:25.756275  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:25.756284  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:25.756294  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:25.756304  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:25.756318  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:25 GMT
	I0610 14:21:25.756436  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:26.254221  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:26.254240  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.254248  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.254254  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.256484  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:26.256502  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.256511  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.256520  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.256529  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.256542  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.256554  108966 round_trippers.go:580]     Audit-Id: 2ee6f47f-8011-4dda-866c-8dae33dcee68
	I0610 14:21:26.256562  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.256720  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"506","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0610 14:21:26.257042  108966 node_ready.go:58] node "multinode-007346-m02" has status "Ready":"False"
	I0610 14:21:26.754366  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:26.754383  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.754391  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.754398  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.756559  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:26.756575  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.756582  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.756588  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.756593  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.756598  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.756603  108966 round_trippers.go:580]     Audit-Id: 654ecd0d-fcb6-4861-9886-ea470dbd8741
	I0610 14:21:26.756608  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.756703  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"552","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0610 14:21:26.757017  108966 node_ready.go:49] node "multinode-007346-m02" has status "Ready":"True"
	I0610 14:21:26.757033  108966 node_ready.go:38] duration metric: took 45.00615196s waiting for node "multinode-007346-m02" to be "Ready" ...
	I0610 14:21:26.757042  108966 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
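The loop above polls `GET /api/v1/nodes/multinode-007346-m02` roughly every 500 ms until the node's status flips from `"Ready":"False"` to `"Ready":"True"`. The readiness decision itself comes from the `conditions` array in the Node status returned in each response body. A minimal sketch of that check (hypothetical type and function names; minikube's actual logic lives in `node_ready.go`):

```go
package main

import "fmt"

// nodeCondition mirrors one entry of the "conditions" array in the
// Node status objects returned by the API server in the log above.
type nodeCondition struct {
	Type   string
	Status string
}

// nodeReady reports whether the conditions include Ready=True, which
// is what ends the polling loop in the log ("Ready":"True").
func nodeReady(conds []nodeCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	notReady := []nodeCondition{{Type: "Ready", Status: "False"}}
	ready := []nodeCondition{{Type: "Ready", Status: "True"}}
	fmt.Println(nodeReady(notReady), nodeReady(ready)) // false true
}
```

The same shape of check is then repeated per system-critical pod (coredns, etcd, kube-apiserver, and so on) against each pod's `Ready` condition.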
	I0610 14:21:26.757102  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0610 14:21:26.757112  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.757122  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.757132  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.760424  108966 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 14:21:26.760448  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.760457  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.760463  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.760470  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.760478  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.760489  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.760498  108966 round_trippers.go:580]     Audit-Id: b02f28e0-2849-4ca0-a4ad-8cb82251ca56
	I0610 14:21:26.761953  108966 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"552"},"items":[{"metadata":{"name":"coredns-5d78c9869d-shl5g","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"cd36daa1-b02e-4fe3-a293-11c38f14826b","resourceVersion":"444","creationTimestamp":"2023-06-10T14:19:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"23a81094-3c32-46de-9e16-9015a058b87b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23a81094-3c32-46de-9e16-9015a058b87b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I0610 14:21:26.764534  108966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-shl5g" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:26.764590  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-shl5g
	I0610 14:21:26.764597  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.764605  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.764613  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.766328  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:21:26.766342  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.766348  108966 round_trippers.go:580]     Audit-Id: 9ce86c3d-0ddb-4237-8675-1435b7698f93
	I0610 14:21:26.766354  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.766359  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.766364  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.766369  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.766374  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.766519  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-shl5g","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"cd36daa1-b02e-4fe3-a293-11c38f14826b","resourceVersion":"444","creationTimestamp":"2023-06-10T14:19:53Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"23a81094-3c32-46de-9e16-9015a058b87b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"23a81094-3c32-46de-9e16-9015a058b87b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0610 14:21:26.767030  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:21:26.767046  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.767057  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.767067  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.768732  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:21:26.768750  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.768759  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.768767  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.768776  108966 round_trippers.go:580]     Audit-Id: ba74ea48-a615-44e4-8c82-278e6a59bcef
	I0610 14:21:26.768784  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.768793  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.768806  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.768966  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:21:26.769243  108966 pod_ready.go:92] pod "coredns-5d78c9869d-shl5g" in "kube-system" namespace has status "Ready":"True"
	I0610 14:21:26.769256  108966 pod_ready.go:81] duration metric: took 4.70495ms waiting for pod "coredns-5d78c9869d-shl5g" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:26.769263  108966 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:26.769300  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-007346
	I0610 14:21:26.769307  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.769314  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.769319  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.770987  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:21:26.771003  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.771010  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.771016  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.771021  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.771027  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.771032  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.771040  108966 round_trippers.go:580]     Audit-Id: 480907c4-fc15-4a40-80ec-be7bf3bea3bb
	I0610 14:21:26.771111  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-007346","namespace":"kube-system","uid":"6420712a-1ac5-4bc1-9126-4744fdf88efb","resourceVersion":"299","creationTimestamp":"2023-06-10T14:19:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"37b6fdeb2b133f7dbaa387ba796c1ab4","kubernetes.io/config.mirror":"37b6fdeb2b133f7dbaa387ba796c1ab4","kubernetes.io/config.seen":"2023-06-10T14:19:39.785675959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0610 14:21:26.771489  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:21:26.771505  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.771513  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.771523  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.773187  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:21:26.773200  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.773206  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.773212  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.773217  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.773223  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.773231  108966 round_trippers.go:580]     Audit-Id: a5a3dfa6-0a7f-4595-aa9e-bb6709e558c2
	I0610 14:21:26.773236  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.773380  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:21:26.773691  108966 pod_ready.go:92] pod "etcd-multinode-007346" in "kube-system" namespace has status "Ready":"True"
	I0610 14:21:26.773707  108966 pod_ready.go:81] duration metric: took 4.437783ms waiting for pod "etcd-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:26.773724  108966 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:26.773776  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-007346
	I0610 14:21:26.773786  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.773796  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.773807  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.775454  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:21:26.775473  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.775479  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.775485  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.775490  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.775495  108966 round_trippers.go:580]     Audit-Id: 7b7aaa68-a584-4a98-a8c8-5cb8ee3bdcac
	I0610 14:21:26.775501  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.775506  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.775663  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-007346","namespace":"kube-system","uid":"dfa6499c-9c79-4d60-b19a-a9777559448d","resourceVersion":"296","creationTimestamp":"2023-06-10T14:19:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"907f8705ef160d439db593cd98924499","kubernetes.io/config.mirror":"907f8705ef160d439db593cd98924499","kubernetes.io/config.seen":"2023-06-10T14:19:39.785680156Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0610 14:21:26.776121  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:21:26.776137  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.776148  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.776158  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.778069  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:21:26.778089  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.778100  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.778108  108966 round_trippers.go:580]     Audit-Id: 67bcc64f-bafe-4b31-ae0e-4d95126a777a
	I0610 14:21:26.778117  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.778122  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.778130  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.778136  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.778251  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:21:26.778588  108966 pod_ready.go:92] pod "kube-apiserver-multinode-007346" in "kube-system" namespace has status "Ready":"True"
	I0610 14:21:26.778604  108966 pod_ready.go:81] duration metric: took 4.869167ms waiting for pod "kube-apiserver-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:26.778613  108966 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:26.778659  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-007346
	I0610 14:21:26.778670  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.778681  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.778690  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.780636  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:21:26.780654  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.780663  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.780671  108966 round_trippers.go:580]     Audit-Id: 358f1298-c8c3-472f-aa27-5799f5197140
	I0610 14:21:26.780679  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.780690  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.780702  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.780710  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.780841  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-007346","namespace":"kube-system","uid":"138c0daf-2ed8-4b72-8bd1-47e4f14030b1","resourceVersion":"293","creationTimestamp":"2023-06-10T14:19:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ca1849216ded5706a7fff56f8b58428f","kubernetes.io/config.mirror":"ca1849216ded5706a7fff56f8b58428f","kubernetes.io/config.seen":"2023-06-10T14:19:39.785681888Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0610 14:21:26.781167  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:21:26.781177  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.781183  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.781189  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.782757  108966 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0610 14:21:26.782772  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.782778  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.782784  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.782789  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.782798  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.782809  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.782820  108966 round_trippers.go:580]     Audit-Id: ecac164c-9a0e-4372-849d-b406387cf57e
	I0610 14:21:26.782927  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:21:26.783192  108966 pod_ready.go:92] pod "kube-controller-manager-multinode-007346" in "kube-system" namespace has status "Ready":"True"
	I0610 14:21:26.783205  108966 pod_ready.go:81] duration metric: took 4.585675ms waiting for pod "kube-controller-manager-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:26.783213  108966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pswh7" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:26.954547  108966 request.go:628] Waited for 171.276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pswh7
	I0610 14:21:26.954633  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pswh7
	I0610 14:21:26.954641  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:26.954653  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:26.954673  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:26.956915  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:26.956935  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:26.956944  108966 round_trippers.go:580]     Audit-Id: e93775d7-48e6-4f77-8110-2b8db0421eca
	I0610 14:21:26.956954  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:26.956963  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:26.956972  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:26.956980  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:26.956987  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:26 GMT
	I0610 14:21:26.957088  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pswh7","generateName":"kube-proxy-","namespace":"kube-system","uid":"a4e7f056-9b22-442e-a512-a591ec2bff2a","resourceVersion":"404","creationTimestamp":"2023-06-10T14:19:53Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"551ccd1d-3af1-41a9-ad14-2ce1135d55c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"551ccd1d-3af1-41a9-ad14-2ce1135d55c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5508 chars]
	I0610 14:21:27.154949  108966 request.go:628] Waited for 197.432724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:21:27.155009  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:21:27.155016  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:27.155027  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:27.155052  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:27.157462  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:27.157486  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:27.157495  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:27.157504  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:27.157512  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:27.157521  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:27 GMT
	I0610 14:21:27.157536  108966 round_trippers.go:580]     Audit-Id: 0e4c0cb9-06f1-40f4-b5f8-d95531c14b69
	I0610 14:21:27.157557  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:27.157686  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:21:27.157986  108966 pod_ready.go:92] pod "kube-proxy-pswh7" in "kube-system" namespace has status "Ready":"True"
	I0610 14:21:27.158000  108966 pod_ready.go:81] duration metric: took 374.783088ms waiting for pod "kube-proxy-pswh7" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:27.158009  108966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wcrw2" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:27.355374  108966 request.go:628] Waited for 197.313285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wcrw2
	I0610 14:21:27.355435  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wcrw2
	I0610 14:21:27.355448  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:27.355456  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:27.355462  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:27.357540  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:27.357561  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:27.357571  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:27.357578  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:27.357586  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:27.357593  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:27 GMT
	I0610 14:21:27.357605  108966 round_trippers.go:580]     Audit-Id: 1d41e20c-59a2-417a-bc67-62e10bf60bb2
	I0610 14:21:27.357612  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:27.357741  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wcrw2","generateName":"kube-proxy-","namespace":"kube-system","uid":"fa92ea5d-260e-4797-a140-004925429a34","resourceVersion":"523","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"551ccd1d-3af1-41a9-ad14-2ce1135d55c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"551ccd1d-3af1-41a9-ad14-2ce1135d55c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5516 chars]
	I0610 14:21:27.555376  108966 request.go:628] Waited for 197.250957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:27.555440  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346-m02
	I0610 14:21:27.555449  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:27.555459  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:27.555473  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:27.557503  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:27.557520  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:27.557526  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:27.557532  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:27 GMT
	I0610 14:21:27.557537  108966 round_trippers.go:580]     Audit-Id: 64f61649-a2a3-472b-8b10-88d8b9cedf08
	I0610 14:21:27.557542  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:27.557547  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:27.557552  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:27.557642  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346-m02","uid":"139bd6d9-f2a6-4f3f-ad22-617370df149e","resourceVersion":"552","creationTimestamp":"2023-06-10T14:20:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:20:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0610 14:21:27.557933  108966 pod_ready.go:92] pod "kube-proxy-wcrw2" in "kube-system" namespace has status "Ready":"True"
	I0610 14:21:27.557944  108966 pod_ready.go:81] duration metric: took 399.929385ms waiting for pod "kube-proxy-wcrw2" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:27.557953  108966 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:27.755372  108966 request.go:628] Waited for 197.356174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-007346
	I0610 14:21:27.755445  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-007346
	I0610 14:21:27.755455  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:27.755468  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:27.755482  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:27.757857  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:27.757875  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:27.757882  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:27.757887  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:27.757893  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:27 GMT
	I0610 14:21:27.757898  108966 round_trippers.go:580]     Audit-Id: 92c47738-d958-4f4c-977e-0e631a4b08be
	I0610 14:21:27.757905  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:27.757913  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:27.758029  108966 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-007346","namespace":"kube-system","uid":"572e869a-7b30-452e-9389-24f81d604d9f","resourceVersion":"294","creationTimestamp":"2023-06-10T14:19:39Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b68a00a0437cfef17ee6606fa6c3c05f","kubernetes.io/config.mirror":"b68a00a0437cfef17ee6606fa6c3c05f","kubernetes.io/config.seen":"2023-06-10T14:19:39.785683331Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-10T14:19:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0610 14:21:27.954789  108966 request.go:628] Waited for 196.357856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:21:27.954862  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-007346
	I0610 14:21:27.954870  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:27.954883  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:27.954900  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:27.957152  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:27.957169  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:27.957175  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:27.957183  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:27.957191  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:27.957199  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:27.957207  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:27 GMT
	I0610 14:21:27.957215  108966 round_trippers.go:580]     Audit-Id: 81b270fc-047d-47f8-8e54-3f0e064a476f
	I0610 14:21:27.957324  108966 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-10T14:19:36Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0610 14:21:27.957647  108966 pod_ready.go:92] pod "kube-scheduler-multinode-007346" in "kube-system" namespace has status "Ready":"True"
	I0610 14:21:27.957663  108966 pod_ready.go:81] duration metric: took 399.703662ms waiting for pod "kube-scheduler-multinode-007346" in "kube-system" namespace to be "Ready" ...
	I0610 14:21:27.957675  108966 pod_ready.go:38] duration metric: took 1.200622223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 14:21:27.957694  108966 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 14:21:27.957752  108966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:21:27.968062  108966 system_svc.go:56] duration metric: took 10.353163ms WaitForService to wait for kubelet.
	I0610 14:21:27.968086  108966 kubeadm.go:581] duration metric: took 46.232915187s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 14:21:27.968112  108966 node_conditions.go:102] verifying NodePressure condition ...
	I0610 14:21:28.154457  108966 request.go:628] Waited for 186.255505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0610 14:21:28.154502  108966 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0610 14:21:28.154507  108966 round_trippers.go:469] Request Headers:
	I0610 14:21:28.154515  108966 round_trippers.go:473]     Accept: application/json, */*
	I0610 14:21:28.154521  108966 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 14:21:28.156779  108966 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 14:21:28.156803  108966 round_trippers.go:577] Response Headers:
	I0610 14:21:28.156814  108966 round_trippers.go:580]     Audit-Id: 6a3136f7-cb83-46b2-bf59-1b951987ca16
	I0610 14:21:28.156824  108966 round_trippers.go:580]     Cache-Control: no-cache, private
	I0610 14:21:28.156833  108966 round_trippers.go:580]     Content-Type: application/json
	I0610 14:21:28.156842  108966 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 36161f44-d115-4e55-8ad0-02fda88f7179
	I0610 14:21:28.156851  108966 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: af1cc1ae-afa8-490f-8c96-41a0da1d9c72
	I0610 14:21:28.156860  108966 round_trippers.go:580]     Date: Sat, 10 Jun 2023 14:21:28 GMT
	I0610 14:21:28.157053  108966 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"553"},"items":[{"metadata":{"name":"multinode-007346","uid":"5663ff3e-7127-4510-9876-9b8b6537884e","resourceVersion":"425","creationTimestamp":"2023-06-10T14:19:37Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-007346","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3273891fc7fc0f39c65075197baa2d52fc489f6f","minikube.k8s.io/name":"multinode-007346","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_10T14_19_40_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12168 chars]
	I0610 14:21:28.157546  108966 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0610 14:21:28.157561  108966 node_conditions.go:123] node cpu capacity is 8
	I0610 14:21:28.157570  108966 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0610 14:21:28.157574  108966 node_conditions.go:123] node cpu capacity is 8
	I0610 14:21:28.157578  108966 node_conditions.go:105] duration metric: took 189.461501ms to run NodePressure ...
	I0610 14:21:28.157591  108966 start.go:228] waiting for startup goroutines ...
	I0610 14:21:28.157617  108966 start.go:242] writing updated cluster config ...
	I0610 14:21:28.157888  108966 ssh_runner.go:195] Run: rm -f paused
	I0610 14:21:28.202378  108966 start.go:573] kubectl: 1.27.2, cluster: 1.27.2 (minor skew: 0)
	I0610 14:21:28.205437  108966 out.go:177] * Done! kubectl is now configured to use "multinode-007346" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Jun 10 14:20:25 multinode-007346 crio[962]: time="2023-06-10 14:20:25.738495972Z" level=info msg="Starting container: f4c4e3380a9e355bc8372bd58583d6e9e9d9edf393663fdfd790e93f1ff786c2" id=f234cb1d-2628-4179-8b7e-84a00163ca76 name=/runtime.v1.RuntimeService/StartContainer
	Jun 10 14:20:25 multinode-007346 crio[962]: time="2023-06-10 14:20:25.742037245Z" level=info msg="Created container 91a15969874902bee8247f0262f583c4a826301e8e00afe5689523da6b8c54e3: kube-system/coredns-5d78c9869d-shl5g/coredns" id=eda2c32e-b465-40f2-b00e-b658fdd3a50c name=/runtime.v1.RuntimeService/CreateContainer
	Jun 10 14:20:25 multinode-007346 crio[962]: time="2023-06-10 14:20:25.742547896Z" level=info msg="Starting container: 91a15969874902bee8247f0262f583c4a826301e8e00afe5689523da6b8c54e3" id=361f43d2-ea3a-47fd-a09b-21240efddbfd name=/runtime.v1.RuntimeService/StartContainer
	Jun 10 14:20:25 multinode-007346 crio[962]: time="2023-06-10 14:20:25.764780425Z" level=info msg="Started container" PID=2355 containerID=f4c4e3380a9e355bc8372bd58583d6e9e9d9edf393663fdfd790e93f1ff786c2 description=kube-system/storage-provisioner/storage-provisioner id=f234cb1d-2628-4179-8b7e-84a00163ca76 name=/runtime.v1.RuntimeService/StartContainer sandboxID=27801746ae51c1db28f9f72b40c7ef1cdfe4af3cab3ea77a8cc09b5bca5e770a
	Jun 10 14:20:25 multinode-007346 crio[962]: time="2023-06-10 14:20:25.768333835Z" level=info msg="Started container" PID=2365 containerID=91a15969874902bee8247f0262f583c4a826301e8e00afe5689523da6b8c54e3 description=kube-system/coredns-5d78c9869d-shl5g/coredns id=361f43d2-ea3a-47fd-a09b-21240efddbfd name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8900a20ddd4df830124c6535734a69c0d3a174cf09391a3149004deea04ff01
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.199457182Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-6nqgr/POD" id=11e44dae-492e-49f3-8e5c-081f211ab4c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.199528615Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.216191707Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-6nqgr Namespace:default ID:da19aeb6218c29472a3c11d227db891ced106b6da8aa92e12b16fbb18f6948b1 UID:c4052d27-810a-409a-bbaf-6a3e05c6cec1 NetNS:/var/run/netns/08280dc8-588c-472c-ab5b-8e1ccb01b55b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.216222445Z" level=info msg="Adding pod default_busybox-67b7f59bb-6nqgr to CNI network \"kindnet\" (type=ptp)"
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.225132119Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-6nqgr Namespace:default ID:da19aeb6218c29472a3c11d227db891ced106b6da8aa92e12b16fbb18f6948b1 UID:c4052d27-810a-409a-bbaf-6a3e05c6cec1 NetNS:/var/run/netns/08280dc8-588c-472c-ab5b-8e1ccb01b55b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.225273717Z" level=info msg="Checking pod default_busybox-67b7f59bb-6nqgr for CNI network kindnet (type=ptp)"
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.248814270Z" level=info msg="Ran pod sandbox da19aeb6218c29472a3c11d227db891ced106b6da8aa92e12b16fbb18f6948b1 with infra container: default/busybox-67b7f59bb-6nqgr/POD" id=11e44dae-492e-49f3-8e5c-081f211ab4c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.249818053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=d139af56-ff14-4697-b4ea-f29edd5d165a name=/runtime.v1.ImageService/ImageStatus
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.250062550Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=d139af56-ff14-4697-b4ea-f29edd5d165a name=/runtime.v1.ImageService/ImageStatus
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.250835318Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=d7bc0a0e-0562-40f7-bddb-d4fd2ea6bb8e name=/runtime.v1.ImageService/PullImage
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.262759912Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jun 10 14:21:29 multinode-007346 crio[962]: time="2023-06-10 14:21:29.514795959Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jun 10 14:21:30 multinode-007346 crio[962]: time="2023-06-10 14:21:30.069126753Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=d7bc0a0e-0562-40f7-bddb-d4fd2ea6bb8e name=/runtime.v1.ImageService/PullImage
	Jun 10 14:21:30 multinode-007346 crio[962]: time="2023-06-10 14:21:30.069980744Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=94c1d0e3-1785-445a-baee-d53510d777a8 name=/runtime.v1.ImageService/ImageStatus
	Jun 10 14:21:30 multinode-007346 crio[962]: time="2023-06-10 14:21:30.070807972Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=94c1d0e3-1785-445a-baee-d53510d777a8 name=/runtime.v1.ImageService/ImageStatus
	Jun 10 14:21:30 multinode-007346 crio[962]: time="2023-06-10 14:21:30.071615607Z" level=info msg="Creating container: default/busybox-67b7f59bb-6nqgr/busybox" id=0a6773e4-c97e-4f18-9c8d-67c65f4d81b5 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 10 14:21:30 multinode-007346 crio[962]: time="2023-06-10 14:21:30.071711959Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 10 14:21:30 multinode-007346 crio[962]: time="2023-06-10 14:21:30.122087058Z" level=info msg="Created container 5b468538467238525589a20e61229e7f4e3af5b6f6f98cb0bb191d1277d46675: default/busybox-67b7f59bb-6nqgr/busybox" id=0a6773e4-c97e-4f18-9c8d-67c65f4d81b5 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 10 14:21:30 multinode-007346 crio[962]: time="2023-06-10 14:21:30.122747928Z" level=info msg="Starting container: 5b468538467238525589a20e61229e7f4e3af5b6f6f98cb0bb191d1277d46675" id=fee4acfa-ffdc-4cb0-84f7-c0bdab492151 name=/runtime.v1.RuntimeService/StartContainer
	Jun 10 14:21:30 multinode-007346 crio[962]: time="2023-06-10 14:21:30.130690397Z" level=info msg="Started container" PID=2539 containerID=5b468538467238525589a20e61229e7f4e3af5b6f6f98cb0bb191d1277d46675 description=default/busybox-67b7f59bb-6nqgr/busybox id=fee4acfa-ffdc-4cb0-84f7-c0bdab492151 name=/runtime.v1.RuntimeService/StartContainer sandboxID=da19aeb6218c29472a3c11d227db891ced106b6da8aa92e12b16fbb18f6948b1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5b46853846723       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   da19aeb6218c2       busybox-67b7f59bb-6nqgr
	91a1596987490       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   0                   d8900a20ddd4d       coredns-5d78c9869d-shl5g
	f4c4e3380a9e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   27801746ae51c       storage-provisioner
	8fd31782daf9b       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      About a minute ago   Running             kindnet-cni               0                   6ddfbed20c88b       kindnet-tsnlt
	b03b5c3c916c3       b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee                                      About a minute ago   Running             kube-proxy                0                   a162fcc3fe08b       kube-proxy-pswh7
	6cc0540979c3a       c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370                                      About a minute ago   Running             kube-apiserver            0                   c10a84ae7c317       kube-apiserver-multinode-007346
	0bc3078f9b92d       ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12                                      About a minute ago   Running             kube-controller-manager   0                   53efffc684991       kube-controller-manager-multinode-007346
	bdadc8b23ba85       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      2 minutes ago        Running             etcd                      0                   1c385213592e7       etcd-multinode-007346
	48b19703df8fb       89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0                                      2 minutes ago        Running             kube-scheduler            0                   2bba16d3ae310       kube-scheduler-multinode-007346
	
	* 
	* ==> coredns [91a15969874902bee8247f0262f583c4a826301e8e00afe5689523da6b8c54e3] <==
	* [INFO] 10.244.0.3:53142 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067105s
	[INFO] 10.244.1.2:54678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013971s
	[INFO] 10.244.1.2:49542 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001857562s
	[INFO] 10.244.1.2:53002 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083758s
	[INFO] 10.244.1.2:42066 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085111s
	[INFO] 10.244.1.2:38634 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001304538s
	[INFO] 10.244.1.2:60740 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006712s
	[INFO] 10.244.1.2:51219 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066179s
	[INFO] 10.244.1.2:57674 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042591s
	[INFO] 10.244.0.3:34431 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109271s
	[INFO] 10.244.0.3:55786 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069232s
	[INFO] 10.244.0.3:60122 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081604s
	[INFO] 10.244.0.3:42108 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079195s
	[INFO] 10.244.1.2:33212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099525s
	[INFO] 10.244.1.2:43489 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091916s
	[INFO] 10.244.1.2:41318 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061567s
	[INFO] 10.244.1.2:58750 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047857s
	[INFO] 10.244.0.3:35838 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092334s
	[INFO] 10.244.0.3:54341 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000115954s
	[INFO] 10.244.0.3:41809 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082112s
	[INFO] 10.244.0.3:60037 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082898s
	[INFO] 10.244.1.2:34346 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012012s
	[INFO] 10.244.1.2:49038 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093143s
	[INFO] 10.244.1.2:60344 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074898s
	[INFO] 10.244.1.2:41525 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000063902s
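The CoreDNS query-log lines above follow a fixed single-line format, so they can be parsed mechanically when triaging DNS failures in a report like this. A minimal sketch — the field names are my own, inferred from the lines above, not CoreDNS's official schema:

```python
import re

# Inferred layout of a CoreDNS query-log line (field names are illustrative):
# [INFO] <client ip:port> - <id> "<qtype> <qclass> <name> <proto> <size> <do> <bufsize>" <rcode> <flags> <rsize> <duration>
LOG_RE = re.compile(
    r'\[INFO\] (?P<client>\S+) - (?P<id>\d+) '
    r'"(?P<qtype>\S+) (?P<qclass>\S+) (?P<name>\S+) (?P<proto>\S+) (?P<size>\d+) (?P<do>\S+) (?P<bufsize>\d+)" '
    r'(?P<rcode>\S+) (?P<flags>\S+) (?P<rsize>\d+) (?P<duration>\S+)'
)

def parse_coredns_line(line: str) -> dict:
    """Return the named fields of one query-log line, or {} if it doesn't match."""
    m = LOG_RE.search(line)
    return m.groupdict() if m else {}

# Sample line copied from the output above.
entry = parse_coredns_line(
    '[INFO] 10.244.1.2:49542 - 3 "AAAA IN kubernetes.default. udp 36 false 512" '
    'NXDOMAIN qr,rd,ra 36 0.001857562s'
)
```

Filtering the parsed entries for `rcode != "NOERROR"` is a quick way to spot the NXDOMAIN search-path probes mixed in with the successful lookups above.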
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-007346
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-007346
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f
	                    minikube.k8s.io/name=multinode-007346
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T14_19_40_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 14:19:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-007346
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 14:21:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 14:20:25 +0000   Sat, 10 Jun 2023 14:19:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 14:20:25 +0000   Sat, 10 Jun 2023 14:19:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 14:20:25 +0000   Sat, 10 Jun 2023 14:19:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 14:20:25 +0000   Sat, 10 Jun 2023 14:20:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-007346
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	System Info:
	  Machine ID:                 86e7b7decfcb402dbb1808e53089aaff
	  System UUID:                3d0dd1ac-1e71-4933-9084-a361284771d9
	  Boot ID:                    e810f687-8f99-49aa-a9be-3ee9974bdd8c
	  Kernel Version:             5.15.0-1035-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-6nqgr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5d78c9869d-shl5g                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-multinode-007346                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-tsnlt                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-multinode-007346             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-multinode-007346    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-pswh7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-multinode-007346             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node multinode-007346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node multinode-007346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x8 over 2m1s)  kubelet          Node multinode-007346 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node multinode-007346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node multinode-007346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node multinode-007346 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           102s                 node-controller  Node multinode-007346 event: Registered Node multinode-007346 in Controller
	  Normal  NodeReady                69s                  kubelet          Node multinode-007346 status is now: NodeReady
	
	
	Name:               multinode-007346-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-007346-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 14:20:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-007346-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 14:21:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 14:21:26 +0000   Sat, 10 Jun 2023 14:20:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 14:21:26 +0000   Sat, 10 Jun 2023 14:20:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 14:21:26 +0000   Sat, 10 Jun 2023 14:20:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 14:21:26 +0000   Sat, 10 Jun 2023 14:21:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-007346-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32871728Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f0dd43dcd5b4de6a3b255641fd3497a
	  System UUID:                5f2e6444-3020-4e12-b53a-49606e83c29b
	  Boot ID:                    e810f687-8f99-49aa-a9be-3ee9974bdd8c
	  Kernel Version:             5.15.0-1035-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.5
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-r6l8p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-vlws5              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-proxy-wcrw2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 36s                kube-proxy       
	  Normal  NodeHasSufficientMemory  53s (x5 over 55s)  kubelet          Node multinode-007346-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x5 over 55s)  kubelet          Node multinode-007346-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x5 over 55s)  kubelet          Node multinode-007346-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node multinode-007346-m02 event: Registered Node multinode-007346-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-007346-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004913] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006589] FS-Cache: N-cookie d=00000000cd7bd88f{9p.inode} n=00000000111740fd
	[  +0.007346] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.303680] FS-Cache: Duplicate cookie detected
	[  +0.004786] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006739] FS-Cache: O-cookie d=00000000cd7bd88f{9p.inode} n=000000001b29a883
	[  +0.007358] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004928] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006593] FS-Cache: N-cookie d=00000000cd7bd88f{9p.inode} n=000000000c71919a
	[  +0.008749] FS-Cache: N-key=[8] '0690130200000000'
	[  +1.834498] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jun10 14:11] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[  +1.000421] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[  +2.015794] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[Jun10 14:12] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[  +8.191118] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[ +16.126260] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	[ +33.020471] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 56 58 ee e5 62 11 0e 4f 87 d3 0e 52 08 00
	
	* 
	* ==> etcd [bdadc8b23ba85ad6e9bbb72d913550cb6e54af8d959a6c1abd1944e8656d95ae] <==
	* {"level":"info","ts":"2023-06-10T14:19:34.568Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-10T14:19:34.568Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T14:19:34.568Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-06-10T14:19:34.568Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-06-10T14:19:34.569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-06-10T14:19:34.569Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-06-10T14:19:34.690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T14:19:34.690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T14:19:34.690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-06-10T14:19:34.690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T14:19:34.690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-06-10T14:19:34.690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-06-10T14:19:34.690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-06-10T14:19:34.691Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:19:34.692Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:19:34.692Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:19:34.692Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-007346 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T14:19:34.693Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:19:34.693Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:19:34.693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T14:19:34.693Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:19:34.693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T14:19:34.694Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-06-10T14:19:34.694Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T14:20:31.960Z","caller":"traceutil/trace.go:171","msg":"trace[2013702682] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"140.883437ms","start":"2023-06-10T14:20:31.819Z","end":"2023-06-10T14:20:31.960Z","steps":["trace[2013702682] 'process raft request'  (duration: 140.765828ms)"],"step_count":1}
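Unlike the CoreDNS output, the etcd entries above are structured JSON (zap), so each line decodes directly and slow-request traces can be pulled out numerically. A minimal sketch; the sample line is abridged from the trace entry above, keeping only the keys used here:

```python
import json

# One etcd zap log line, abridged from the trace entry above.
line = (
    '{"level":"info","ts":"2023-06-10T14:20:31.960Z",'
    '"caller":"traceutil/trace.go:171",'
    '"msg":"trace[2013702682] transaction","duration":"140.883437ms"}'
)

record = json.loads(line)

# Convert the human-readable duration to a float for thresholding
# (assumes a millisecond suffix, as in the line above).
duration_ms = float(record["duration"].removesuffix("ms"))
is_slow = duration_ms > 100.0  # flag requests slower than 100ms
```

Running every etcd line through this kind of filter is a cheap way to check whether apiserver latency in a failed test correlates with slow etcd transactions.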
	
	* 
	* ==> kernel <==
	*  14:21:34 up  2:04,  0 users,  load average: 0.51, 1.06, 0.92
	Linux multinode-007346 5.15.0-1035-gcp #43~20.04.1-Ubuntu SMP Mon May 22 16:49:11 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [8fd31782daf9b58b9953b1b7c590505d61ed3330b370e6e3648a3edd1fc63777] <==
	* I0610 14:20:24.898176       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0610 14:20:24.898225       1 main.go:227] handling current node
	I0610 14:20:34.913170       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0610 14:20:34.913192       1 main.go:227] handling current node
	I0610 14:20:44.925718       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0610 14:20:44.925741       1 main.go:227] handling current node
	I0610 14:20:44.925749       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0610 14:20:44.925754       1 main.go:250] Node multinode-007346-m02 has CIDR [10.244.1.0/24] 
	I0610 14:20:44.925906       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0610 14:20:54.930069       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0610 14:20:54.930093       1 main.go:227] handling current node
	I0610 14:20:54.930102       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0610 14:20:54.930106       1 main.go:250] Node multinode-007346-m02 has CIDR [10.244.1.0/24] 
	I0610 14:21:04.938350       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0610 14:21:04.938372       1 main.go:227] handling current node
	I0610 14:21:04.938381       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0610 14:21:04.938387       1 main.go:250] Node multinode-007346-m02 has CIDR [10.244.1.0/24] 
	I0610 14:21:14.943160       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0610 14:21:14.943183       1 main.go:227] handling current node
	I0610 14:21:14.943192       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0610 14:21:14.943196       1 main.go:250] Node multinode-007346-m02 has CIDR [10.244.1.0/24] 
	I0610 14:21:24.949974       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0610 14:21:24.950112       1 main.go:227] handling current node
	I0610 14:21:24.950185       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0610 14:21:24.950278       1 main.go:250] Node multinode-007346-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [6cc0540979c3a741078a04645dc8a28174c1c655ae448667f247327e4fa97d1a] <==
	* I0610 14:19:37.060092       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0610 14:19:37.060162       1 cache.go:39] Caches are synced for autoregister controller
	I0610 14:19:37.060205       1 shared_informer.go:318] Caches are synced for configmaps
	I0610 14:19:37.060741       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 14:19:37.060765       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 14:19:37.061660       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 14:19:37.062005       1 controller.go:624] quota admission added evaluator for: namespaces
	I0610 14:19:37.066308       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0610 14:19:37.072856       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 14:19:37.712034       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 14:19:37.935803       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 14:19:37.939232       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 14:19:37.939251       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 14:19:38.307481       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 14:19:38.337504       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 14:19:38.386654       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 14:19:38.393712       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0610 14:19:38.394536       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 14:19:38.397877       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 14:19:38.996143       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 14:19:39.735869       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 14:19:39.744701       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 14:19:39.753431       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 14:19:53.525683       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0610 14:19:53.675840       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [0bc3078f9b92d65ff5efdc136dbfd96ec186679c93911055d3bf4caa8009788c] <==
	* I0610 14:19:52.795174       1 node_lifecycle_controller.go:1027] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0610 14:19:52.824771       1 shared_informer.go:318] Caches are synced for persistent volume
	I0610 14:19:52.928666       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 14:19:52.977192       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 14:19:53.290558       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 14:19:53.322079       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 14:19:53.322111       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0610 14:19:53.529445       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0610 14:19:53.682891       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pswh7"
	I0610 14:19:53.684364       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tsnlt"
	I0610 14:19:53.783715       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-dc9dr"
	I0610 14:19:53.789104       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-shl5g"
	I0610 14:19:53.873313       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0610 14:19:53.887415       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-dc9dr"
	I0610 14:20:27.800634       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0610 14:20:41.222252       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-007346-m02\" does not exist"
	I0610 14:20:41.228589       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-007346-m02" podCIDRs=[10.244.1.0/24]
	I0610 14:20:41.235074       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wcrw2"
	I0610 14:20:41.236183       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vlws5"
	I0610 14:20:42.803619       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-007346-m02"
	I0610 14:20:42.803630       1 event.go:307] "Event occurred" object="multinode-007346-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-007346-m02 event: Registered Node multinode-007346-m02 in Controller"
	W0610 14:21:26.521648       1 topologycache.go:232] Can't get CPU or zone information for multinode-007346-m02 node
	I0610 14:21:28.879969       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0610 14:21:28.887530       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-r6l8p"
	I0610 14:21:28.892219       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-6nqgr"
	
	* 
	* ==> kube-proxy [b03b5c3c916c394293a505ddeec53f893104152d400d31a0008a83d120db5d49] <==
	* I0610 14:19:54.661028       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0610 14:19:54.661122       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0610 14:19:54.661154       1 server_others.go:551] "Using iptables proxy"
	I0610 14:19:54.681517       1 server_others.go:190] "Using iptables Proxier"
	I0610 14:19:54.681556       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0610 14:19:54.681569       1 server_others.go:198] "Creating dualStackProxier for iptables"
	I0610 14:19:54.681585       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0610 14:19:54.681620       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 14:19:54.682700       1 server.go:657] "Version info" version="v1.27.2"
	I0610 14:19:54.682723       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 14:19:54.686636       1 config.go:97] "Starting endpoint slice config controller"
	I0610 14:19:54.686725       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 14:19:54.686813       1 config.go:188] "Starting service config controller"
	I0610 14:19:54.686872       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 14:19:54.688897       1 config.go:315] "Starting node config controller"
	I0610 14:19:54.688971       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 14:19:54.787008       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 14:19:54.787007       1 shared_informer.go:318] Caches are synced for service config
	I0610 14:19:54.789624       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [48b19703df8fbac078b310d1735664f59704bb90a46ff9a1a4f08bf7d350a269] <==
	* E0610 14:19:37.061409       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 14:19:37.061145       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 14:19:37.061448       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 14:19:37.061544       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 14:19:37.061112       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 14:19:37.061613       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 14:19:37.061183       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 14:19:37.061679       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 14:19:37.864521       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 14:19:37.864553       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 14:19:37.889097       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 14:19:37.889124       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 14:19:37.984950       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 14:19:37.984988       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 14:19:38.008184       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 14:19:38.008211       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 14:19:38.058538       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 14:19:38.058572       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 14:19:38.097981       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 14:19:38.098010       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 14:19:38.147269       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 14:19:38.147301       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 14:19:38.163507       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 14:19:38.163537       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 14:19:39.982950       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 10 14:19:53 multinode-007346 kubelet[1592]: I0610 14:19:53.773896    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4e7f056-9b22-442e-a512-a591ec2bff2a-lib-modules\") pod \"kube-proxy-pswh7\" (UID: \"a4e7f056-9b22-442e-a512-a591ec2bff2a\") " pod="kube-system/kube-proxy-pswh7"
	Jun 10 14:19:53 multinode-007346 kubelet[1592]: I0610 14:19:53.774016    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5m9t\" (UniqueName: \"kubernetes.io/projected/a4e7f056-9b22-442e-a512-a591ec2bff2a-kube-api-access-p5m9t\") pod \"kube-proxy-pswh7\" (UID: \"a4e7f056-9b22-442e-a512-a591ec2bff2a\") " pod="kube-system/kube-proxy-pswh7"
	Jun 10 14:19:53 multinode-007346 kubelet[1592]: I0610 14:19:53.774078    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/79e2addf-dc39-401b-a53a-a31493f50015-cni-cfg\") pod \"kindnet-tsnlt\" (UID: \"79e2addf-dc39-401b-a53a-a31493f50015\") " pod="kube-system/kindnet-tsnlt"
	Jun 10 14:19:53 multinode-007346 kubelet[1592]: I0610 14:19:53.774125    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79e2addf-dc39-401b-a53a-a31493f50015-xtables-lock\") pod \"kindnet-tsnlt\" (UID: \"79e2addf-dc39-401b-a53a-a31493f50015\") " pod="kube-system/kindnet-tsnlt"
	Jun 10 14:19:53 multinode-007346 kubelet[1592]: I0610 14:19:53.774180    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79e2addf-dc39-401b-a53a-a31493f50015-lib-modules\") pod \"kindnet-tsnlt\" (UID: \"79e2addf-dc39-401b-a53a-a31493f50015\") " pod="kube-system/kindnet-tsnlt"
	Jun 10 14:19:53 multinode-007346 kubelet[1592]: I0610 14:19:53.774236    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvkbw\" (UniqueName: \"kubernetes.io/projected/79e2addf-dc39-401b-a53a-a31493f50015-kube-api-access-zvkbw\") pod \"kindnet-tsnlt\" (UID: \"79e2addf-dc39-401b-a53a-a31493f50015\") " pod="kube-system/kindnet-tsnlt"
	Jun 10 14:19:54 multinode-007346 kubelet[1592]: W0610 14:19:54.060794    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/crio/crio-a162fcc3fe08b5c4811f9e12c686bd48c8ef55ebf86b423371dd3a50f10153e8 WatchSource:0}: Error finding container a162fcc3fe08b5c4811f9e12c686bd48c8ef55ebf86b423371dd3a50f10153e8: Status 404 returned error can't find the container with id a162fcc3fe08b5c4811f9e12c686bd48c8ef55ebf86b423371dd3a50f10153e8
	Jun 10 14:19:54 multinode-007346 kubelet[1592]: W0610 14:19:54.065669    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/crio/crio-6ddfbed20c88bb971c624dd25b747b08062554bfc97a4618546e6638ef1f8ea0 WatchSource:0}: Error finding container 6ddfbed20c88bb971c624dd25b747b08062554bfc97a4618546e6638ef1f8ea0: Status 404 returned error can't find the container with id 6ddfbed20c88bb971c624dd25b747b08062554bfc97a4618546e6638ef1f8ea0
	Jun 10 14:19:54 multinode-007346 kubelet[1592]: I0610 14:19:54.907164    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-tsnlt" podStartSLOduration=1.907122923 podCreationTimestamp="2023-06-10 14:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 14:19:54.90705557 +0000 UTC m=+15.194783370" watchObservedRunningTime="2023-06-10 14:19:54.907122923 +0000 UTC m=+15.194850725"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.314151    1592 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.336607    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pswh7" podStartSLOduration=32.336558669 podCreationTimestamp="2023-06-10 14:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 14:19:54.917593041 +0000 UTC m=+15.205320840" watchObservedRunningTime="2023-06-10 14:20:25.336558669 +0000 UTC m=+45.624286484"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.336852    1592 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.338637    1592 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.383060    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mlnp\" (UniqueName: \"kubernetes.io/projected/cd36daa1-b02e-4fe3-a293-11c38f14826b-kube-api-access-8mlnp\") pod \"coredns-5d78c9869d-shl5g\" (UID: \"cd36daa1-b02e-4fe3-a293-11c38f14826b\") " pod="kube-system/coredns-5d78c9869d-shl5g"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.383111    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0a8cc618-91de-4c43-9ee7-b1e75d4e44bc-tmp\") pod \"storage-provisioner\" (UID: \"0a8cc618-91de-4c43-9ee7-b1e75d4e44bc\") " pod="kube-system/storage-provisioner"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.383132    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd36daa1-b02e-4fe3-a293-11c38f14826b-config-volume\") pod \"coredns-5d78c9869d-shl5g\" (UID: \"cd36daa1-b02e-4fe3-a293-11c38f14826b\") " pod="kube-system/coredns-5d78c9869d-shl5g"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.383253    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzc28\" (UniqueName: \"kubernetes.io/projected/0a8cc618-91de-4c43-9ee7-b1e75d4e44bc-kube-api-access-nzc28\") pod \"storage-provisioner\" (UID: \"0a8cc618-91de-4c43-9ee7-b1e75d4e44bc\") " pod="kube-system/storage-provisioner"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: W0610 14:20:25.657649    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/crio/crio-27801746ae51c1db28f9f72b40c7ef1cdfe4af3cab3ea77a8cc09b5bca5e770a WatchSource:0}: Error finding container 27801746ae51c1db28f9f72b40c7ef1cdfe4af3cab3ea77a8cc09b5bca5e770a: Status 404 returned error can't find the container with id 27801746ae51c1db28f9f72b40c7ef1cdfe4af3cab3ea77a8cc09b5bca5e770a
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: W0610 14:20:25.675078    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/crio/crio-d8900a20ddd4df830124c6535734a69c0d3a174cf09391a3149004deea04ff01 WatchSource:0}: Error finding container d8900a20ddd4df830124c6535734a69c0d3a174cf09391a3149004deea04ff01: Status 404 returned error can't find the container with id d8900a20ddd4df830124c6535734a69c0d3a174cf09391a3149004deea04ff01
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.957337    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-shl5g" podStartSLOduration=32.95729202 podCreationTimestamp="2023-06-10 14:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 14:20:25.957180353 +0000 UTC m=+46.244908154" watchObservedRunningTime="2023-06-10 14:20:25.95729202 +0000 UTC m=+46.245019861"
	Jun 10 14:20:25 multinode-007346 kubelet[1592]: I0610 14:20:25.967097    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.967054164 podCreationTimestamp="2023-06-10 14:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 14:20:25.966976871 +0000 UTC m=+46.254704671" watchObservedRunningTime="2023-06-10 14:20:25.967054164 +0000 UTC m=+46.254781968"
	Jun 10 14:21:28 multinode-007346 kubelet[1592]: I0610 14:21:28.897287    1592 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 14:21:29 multinode-007346 kubelet[1592]: I0610 14:21:29.014331    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4gt\" (UniqueName: \"kubernetes.io/projected/c4052d27-810a-409a-bbaf-6a3e05c6cec1-kube-api-access-xw4gt\") pod \"busybox-67b7f59bb-6nqgr\" (UID: \"c4052d27-810a-409a-bbaf-6a3e05c6cec1\") " pod="default/busybox-67b7f59bb-6nqgr"
	Jun 10 14:21:29 multinode-007346 kubelet[1592]: W0610 14:21:29.247293    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/crio/crio-da19aeb6218c29472a3c11d227db891ced106b6da8aa92e12b16fbb18f6948b1 WatchSource:0}: Error finding container da19aeb6218c29472a3c11d227db891ced106b6da8aa92e12b16fbb18f6948b1: Status 404 returned error can't find the container with id da19aeb6218c29472a3c11d227db891ced106b6da8aa92e12b16fbb18f6948b1
	Jun 10 14:21:31 multinode-007346 kubelet[1592]: I0610 14:21:31.069224    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-6nqgr" podStartSLOduration=2.2499117220000002 podCreationTimestamp="2023-06-10 14:21:28 +0000 UTC" firstStartedPulling="2023-06-10 14:21:29.250270645 +0000 UTC m=+109.537998440" lastFinishedPulling="2023-06-10 14:21:30.0695313 +0000 UTC m=+110.357259084" observedRunningTime="2023-06-10 14:21:31.068997441 +0000 UTC m=+111.356725243" watchObservedRunningTime="2023-06-10 14:21:31.069172366 +0000 UTC m=+111.356900168"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-007346 -n multinode-007346
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-007346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.32s)

                                                
                                    
TestRunningBinaryUpgrade (64.77s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.9.0.1508381336.exe start -p running-upgrade-855695 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0610 14:33:36.157553   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.9.0.1508381336.exe start -p running-upgrade-855695 --memory=2200 --vm-driver=docker  --container-runtime=crio: (59.330240739s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-855695 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-855695 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.380675455s)

                                                
                                                
-- stdout --
	* [running-upgrade-855695] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15074
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-855695 in cluster running-upgrade-855695
	* Pulling base image ...
	* Updating the running docker "running-upgrade-855695" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 14:34:22.588463  205524 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:34:22.588599  205524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:34:22.588606  205524 out.go:309] Setting ErrFile to fd 2...
	I0610 14:34:22.588610  205524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:34:22.588753  205524 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:34:22.589403  205524 out.go:303] Setting JSON to false
	I0610 14:34:22.591131  205524 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8218,"bootTime":1686399445,"procs":609,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:34:22.591208  205524 start.go:137] virtualization: kvm guest
	I0610 14:34:22.594025  205524 out.go:177] * [running-upgrade-855695] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 14:34:22.596285  205524 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 14:34:22.598970  205524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:34:22.596295  205524 notify.go:220] Checking for updates...
	I0610 14:34:22.602291  205524 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:34:22.605164  205524 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:34:22.606927  205524 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 14:34:22.608626  205524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 14:34:22.612142  205524 config.go:182] Loaded profile config "running-upgrade-855695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0610 14:34:22.612176  205524 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0610 14:34:22.616419  205524 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0610 14:34:22.619366  205524 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 14:34:22.649072  205524 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:34:22.649163  205524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:34:22.725357  205524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:80 SystemTime:2023-06-10 14:34:22.713412347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:34:22.725471  205524 docker.go:294] overlay module found
	I0610 14:34:22.727548  205524 out.go:177] * Using the docker driver based on existing profile
	I0610 14:34:22.730786  205524 start.go:297] selected driver: docker
	I0610 14:34:22.730801  205524 start.go:875] validating driver "docker" against &{Name:running-upgrade-855695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-855695 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:34:22.730901  205524 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 14:34:22.731783  205524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:34:22.796251  205524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:72 SystemTime:2023-06-10 14:34:22.785218975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:34:22.796526  205524 cni.go:84] Creating CNI manager for ""
	I0610 14:34:22.796545  205524 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0610 14:34:22.796552  205524 start_flags.go:319] config:
	{Name:running-upgrade-855695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-855695 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:34:22.799603  205524 out.go:177] * Starting control plane node running-upgrade-855695 in cluster running-upgrade-855695
	I0610 14:34:22.801261  205524 cache.go:122] Beginning downloading kic base image for docker with crio
	I0610 14:34:22.803255  205524 out.go:177] * Pulling base image ...
	I0610 14:34:22.804816  205524 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0610 14:34:22.804913  205524 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 14:34:22.824972  205524 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0610 14:34:22.824996  205524 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	W0610 14:34:22.831955  205524 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0610 14:34:22.832134  205524 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/running-upgrade-855695/config.json ...
	I0610 14:34:22.832142  205524 cache.go:107] acquiring lock: {Name:mk2ab4dc2519af17da12f89893086e52956fc66b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:34:22.832244  205524 cache.go:115] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 14:34:22.832258  205524 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 125.402µs
	I0610 14:34:22.832272  205524 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 14:34:22.832288  205524 cache.go:107] acquiring lock: {Name:mka2cae3df1e597c161975a949393602b7412ad2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:34:22.832334  205524 cache.go:115] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0610 14:34:22.832345  205524 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 59.182µs
	I0610 14:34:22.832358  205524 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0610 14:34:22.832373  205524 cache.go:107] acquiring lock: {Name:mkf003830ff94131c3e96735a8d10c8bee0ad118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:34:22.832409  205524 cache.go:115] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0610 14:34:22.832422  205524 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 51.706µs
	I0610 14:34:22.832431  205524 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0610 14:34:22.832409  205524 cache.go:195] Successfully downloaded all kic artifacts
	I0610 14:34:22.832454  205524 start.go:364] acquiring machines lock for running-upgrade-855695: {Name:mk2134c23416c4e7400d6f83013feca599479451 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:34:22.832436  205524 cache.go:107] acquiring lock: {Name:mk39703faf212fc2c49cb260fa75cabfee9b09bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:34:22.832520  205524 start.go:368] acquired machines lock for "running-upgrade-855695" in 55.85µs
	I0610 14:34:22.832542  205524 start.go:96] Skipping create...Using existing machine configuration
	I0610 14:34:22.832550  205524 fix.go:55] fixHost starting: m01
	I0610 14:34:22.832541  205524 cache.go:107] acquiring lock: {Name:mk4c5c68073c8df747fa53eda49ec3c4eefad188 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:34:22.832592  205524 cache.go:115] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0610 14:34:22.832604  205524 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 65.508µs
	I0610 14:34:22.832624  205524 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0610 14:34:22.832639  205524 cache.go:107] acquiring lock: {Name:mkb090a71cd20228f61c81ffca27c30dedfcb29c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:34:22.832678  205524 cache.go:115] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0610 14:34:22.832687  205524 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 50.107µs
	I0610 14:34:22.832699  205524 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0610 14:34:22.832712  205524 cache.go:107] acquiring lock: {Name:mk37efa358136e7b150f4b40416fb45dd401267c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:34:22.832784  205524 cache.go:115] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0610 14:34:22.832801  205524 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 89.013µs
	I0610 14:34:22.832811  205524 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0610 14:34:22.832836  205524 cli_runner.go:164] Run: docker container inspect running-upgrade-855695 --format={{.State.Status}}
	I0610 14:34:22.832832  205524 cache.go:107] acquiring lock: {Name:mk08d1c890595b8ad704c2441dc419a18c8c88e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:34:22.832939  205524 cache.go:115] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0610 14:34:22.832953  205524 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 124.045µs
	I0610 14:34:22.832967  205524 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0610 14:34:22.832527  205524 cache.go:115] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0610 14:34:22.833010  205524 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 575.91µs
	I0610 14:34:22.833033  205524 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0610 14:34:22.833047  205524 cache.go:87] Successfully saved all images to host disk.
	I0610 14:34:22.855903  205524 fix.go:103] recreateIfNeeded on running-upgrade-855695: state=Running err=<nil>
	W0610 14:34:22.855931  205524 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 14:34:22.858481  205524 out.go:177] * Updating the running docker "running-upgrade-855695" container ...
	I0610 14:34:22.860044  205524 machine.go:88] provisioning docker machine ...
	I0610 14:34:22.860061  205524 ubuntu.go:169] provisioning hostname "running-upgrade-855695"
	I0610 14:34:22.860104  205524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-855695
	I0610 14:34:22.877726  205524 main.go:141] libmachine: Using SSH client type: native
	I0610 14:34:22.878290  205524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0610 14:34:22.878309  205524 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-855695 && echo "running-upgrade-855695" | sudo tee /etc/hostname
	I0610 14:34:22.991456  205524 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-855695
	
	I0610 14:34:22.991531  205524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-855695
	I0610 14:34:23.008530  205524 main.go:141] libmachine: Using SSH client type: native
	I0610 14:34:23.009063  205524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0610 14:34:23.009092  205524 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-855695' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-855695/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-855695' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 14:34:23.143924  205524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 14:34:23.143951  205524 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15074-18675/.minikube CaCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15074-18675/.minikube}
	I0610 14:34:23.143976  205524 ubuntu.go:177] setting up certificates
	I0610 14:34:23.143986  205524 provision.go:83] configureAuth start
	I0610 14:34:23.144039  205524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-855695
	I0610 14:34:23.164058  205524 provision.go:138] copyHostCerts
	I0610 14:34:23.164125  205524 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem, removing ...
	I0610 14:34:23.164134  205524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem
	I0610 14:34:23.164182  205524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem (1078 bytes)
	I0610 14:34:23.164264  205524 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem, removing ...
	I0610 14:34:23.164272  205524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem
	I0610 14:34:23.164293  205524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem (1123 bytes)
	I0610 14:34:23.164340  205524 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem, removing ...
	I0610 14:34:23.164347  205524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem
	I0610 14:34:23.164368  205524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem (1675 bytes)
	I0610 14:34:23.164409  205524 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-855695 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-855695]
	I0610 14:34:23.244892  205524 provision.go:172] copyRemoteCerts
	I0610 14:34:23.244939  205524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 14:34:23.244972  205524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-855695
	I0610 14:34:23.262983  205524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/running-upgrade-855695/id_rsa Username:docker}
	I0610 14:34:23.345325  205524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 14:34:23.361783  205524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0610 14:34:23.377772  205524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 14:34:23.393967  205524 provision.go:86] duration metric: configureAuth took 249.962747ms
	I0610 14:34:23.393992  205524 ubuntu.go:193] setting minikube options for container-runtime
	I0610 14:34:23.394224  205524 config.go:182] Loaded profile config "running-upgrade-855695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0610 14:34:23.394335  205524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-855695
	I0610 14:34:23.412361  205524 main.go:141] libmachine: Using SSH client type: native
	I0610 14:34:23.412941  205524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0610 14:34:23.412972  205524 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 14:34:23.818581  205524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 14:34:23.818615  205524 machine.go:91] provisioned docker machine in 958.559867ms
	I0610 14:34:23.818629  205524 start.go:300] post-start starting for "running-upgrade-855695" (driver="docker")
	I0610 14:34:23.818637  205524 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 14:34:23.818709  205524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 14:34:23.818754  205524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-855695
	I0610 14:34:23.838139  205524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/running-upgrade-855695/id_rsa Username:docker}
	I0610 14:34:23.935334  205524 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 14:34:23.938831  205524 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0610 14:34:23.938858  205524 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0610 14:34:23.938872  205524 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0610 14:34:23.938881  205524 info.go:137] Remote host: Ubuntu 19.10
	I0610 14:34:23.938896  205524 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/addons for local assets ...
	I0610 14:34:23.938962  205524 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/files for local assets ...
	I0610 14:34:23.939067  205524 filesync.go:149] local asset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> 254852.pem in /etc/ssl/certs
	I0610 14:34:23.939197  205524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 14:34:23.975457  205524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem --> /etc/ssl/certs/254852.pem (1708 bytes)
	I0610 14:34:23.993647  205524 start.go:303] post-start completed in 175.007222ms
	I0610 14:34:23.993698  205524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 14:34:23.993737  205524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-855695
	I0610 14:34:24.012958  205524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/running-upgrade-855695/id_rsa Username:docker}
	I0610 14:34:24.090698  205524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0610 14:34:24.094406  205524 fix.go:57] fixHost completed within 1.261851327s
	I0610 14:34:24.094429  205524 start.go:83] releasing machines lock for "running-upgrade-855695", held for 1.261890125s
	I0610 14:34:24.094492  205524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-855695
	I0610 14:34:24.110130  205524 ssh_runner.go:195] Run: cat /version.json
	I0610 14:34:24.110170  205524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-855695
	I0610 14:34:24.110194  205524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 14:34:24.110281  205524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-855695
	I0610 14:34:24.127054  205524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/running-upgrade-855695/id_rsa Username:docker}
	I0610 14:34:24.129075  205524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/running-upgrade-855695/id_rsa Username:docker}
	W0610 14:34:24.234914  205524 start.go:414] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0610 14:34:24.234987  205524 ssh_runner.go:195] Run: systemctl --version
	I0610 14:34:24.238911  205524 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 14:34:24.288985  205524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 14:34:24.293574  205524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:34:24.310075  205524 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0610 14:34:24.310150  205524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:34:24.342111  205524 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 14:34:24.342136  205524 start.go:481] detecting cgroup driver to use...
	I0610 14:34:24.342177  205524 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0610 14:34:24.342303  205524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 14:34:24.398014  205524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 14:34:24.413715  205524 docker.go:193] disabling cri-docker service (if available) ...
	I0610 14:34:24.413767  205524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 14:34:24.425303  205524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 14:34:24.434958  205524 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0610 14:34:24.443856  205524 docker.go:203] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0610 14:34:24.443899  205524 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 14:34:24.536232  205524 docker.go:209] disabling docker service ...
	I0610 14:34:24.536282  205524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 14:34:24.546573  205524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 14:34:24.554840  205524 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 14:34:24.639745  205524 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 14:34:24.868004  205524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 14:34:24.881792  205524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 14:34:24.903604  205524 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0610 14:34:24.903660  205524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:34:24.915778  205524 out.go:177] 
	W0610 14:34:24.917315  205524 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0610 14:34:24.917337  205524 out.go:239] * 
	W0610 14:34:24.918186  205524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 14:34:24.919990  205524 out.go:177] 

                                                
                                                
** /stderr **
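The root cause in the log above is the `RUNTIME_ENABLE` failure: the pause-image rewrite runs `sed -i` against `/etc/crio/crio.conf.d/02-crio.conf`, but the v1.9.0-era base image being upgraded has no such drop-in file, so sed exits with status 2 ("can't read ... No such file or directory"). The sketch below reproduces that failure mode and shows a defensive variant in a temp directory; the guard (seeding the drop-in before editing) is illustrative, not minikube's actual code.

```shell
# Stand-in for /etc/crio/crio.conf.d on the old base image (empty, as in the log).
conf_dir="$(mktemp -d)"
conf="$conf_dir/02-crio.conf"

# sed -i on a missing file fails with status 2, exactly as the test run saw:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf" 2>/dev/null \
  || echo "sed failed as in the log (status $?)"

# Defensive variant (assumption, not minikube code): seed the drop-in if it is
# absent, then the same in-place rewrite succeeds.
[ -f "$conf" ] || printf '[crio.image]\npause_image = ""\n' > "$conf"
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
grep pause_image "$conf"
```

With the guard in place the rewrite is idempotent, which is what the upgrade path would need on images that predate the `crio.conf.d` layout.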
version_upgrade_test.go:144: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-855695 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-06-10 14:34:24.937172989 +0000 UTC m=+1985.821114663
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-855695
helpers_test.go:235: (dbg) docker inspect running-upgrade-855695:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "921f47ad7f4c914d49e07fa2bf627fb959b10178e8856efad3e373962251dd24",
	        "Created": "2023-06-10T14:33:23.506327047Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-10T14:33:23.937526499Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/921f47ad7f4c914d49e07fa2bf627fb959b10178e8856efad3e373962251dd24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/921f47ad7f4c914d49e07fa2bf627fb959b10178e8856efad3e373962251dd24/hostname",
	        "HostsPath": "/var/lib/docker/containers/921f47ad7f4c914d49e07fa2bf627fb959b10178e8856efad3e373962251dd24/hosts",
	        "LogPath": "/var/lib/docker/containers/921f47ad7f4c914d49e07fa2bf627fb959b10178e8856efad3e373962251dd24/921f47ad7f4c914d49e07fa2bf627fb959b10178e8856efad3e373962251dd24-json.log",
	        "Name": "/running-upgrade-855695",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "running-upgrade-855695:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/152378c4a4b0d489d0dd58aea8a106ba35593f9e47a8f3514e1b9814a6752f0d-init/diff:/var/lib/docker/overlay2/6ef56b8fb7dc51346e1db130f5bba1d1db8279e7a4dd2fe2d90acd45fe959dd1/diff:/var/lib/docker/overlay2/ad614f4e7b26a3a72a5e2af803955c546c25378d720a2ad151fc88907e90e9c5/diff:/var/lib/docker/overlay2/999b850990a1706b34ec91d1abb12ba0722651b4a66152f4f2d648ef6b2aabdc/diff:/var/lib/docker/overlay2/b052eb7b71622b232be5e97a02fdb80eefadddeaa4011717be11e10c966f19d1/diff:/var/lib/docker/overlay2/207433e6c0fcffbcf048af27c3e97c8d73f1d4505f33dc57e6f964bcf4b2290a/diff:/var/lib/docker/overlay2/25699ad492babda053e96e97362c427c83e77ad5657c2e96b2fca2d4ef01ff03/diff:/var/lib/docker/overlay2/310060ca3b973b73492fbf2e0abbb93b876db8c674684d548307efa3633afadb/diff:/var/lib/docker/overlay2/58e46afa633a444637f92bb0c893ddef572eb3b3490484faeb0ed1b8b05bd749/diff:/var/lib/docker/overlay2/62b3412a72681448e73cc17898ce06fe16450fa33f37abc39a7dfe1aa72bfd58/diff:/var/lib/docker/overlay2/72314e
3a55344df1a3de0071a6f2518f6d2dde93411fe9ad653d2743d1dca61b/diff:/var/lib/docker/overlay2/b5dd808ea23737511e3bb99d2d7812c9bcad6ef87634c83fd4e190750f182ab2/diff:/var/lib/docker/overlay2/f21635fff99bba65f670ee934f404e3d256f541eff0b3a16a1ccc967696bb8c9/diff:/var/lib/docker/overlay2/48a14b77aa70886e6beef904046632b9513e222e78926ba7fcfeef0eee77d05c/diff:/var/lib/docker/overlay2/86711152360a9a387bce48ca310ef68cde8be0c80710cf11630c2cac17b40233/diff:/var/lib/docker/overlay2/9f045e8add35c60984632468ba8322f70c632fb741c66278a716259621230cb1/diff:/var/lib/docker/overlay2/23874ac6fecd2bbfd9106784ad7f5c8bce6841c811b969f87d7e4da5484da087/diff:/var/lib/docker/overlay2/20748cd3bf88c0e380507408c686d2f2362bdd4cfd9c48e9f535d63617f743f7/diff:/var/lib/docker/overlay2/6c17f27de3771d636a7863937a840430d378bc7047efdc7fed0d08214570e179/diff:/var/lib/docker/overlay2/4095c6947cfcfb36989abdd53907e28b5c79b1f6aa56cc6ea14f74c4cc5255bf/diff:/var/lib/docker/overlay2/2dbde509e9936d30409841e4e477bb7c5449ec0001dee5af99ed384e1264ac88/diff:/var/lib/d
ocker/overlay2/e6d4fe80de572a01a1d6c69614b80a529500e719ba5f1eb45d973c741b85b02b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/152378c4a4b0d489d0dd58aea8a106ba35593f9e47a8f3514e1b9814a6752f0d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/152378c4a4b0d489d0dd58aea8a106ba35593f9e47a8f3514e1b9814a6752f0d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/152378c4a4b0d489d0dd58aea8a106ba35593f9e47a8f3514e1b9814a6752f0d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-855695",
	                "Source": "/var/lib/docker/volumes/running-upgrade-855695/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-855695",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-855695",
	                "name.minikube.sigs.k8s.io": "running-upgrade-855695",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "712f7ccccc7e6e05d97298c62cfd5dfed69619c240b032932256bbebf020936e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/712f7ccccc7e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "5bb6497b23d8bb0a18294984e39ea0c866057529d0c9e638e35a015f1ae07880",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "b6dd8b56e39dca5004b4f29679450cf72cc974969346bf7bfba02a23a19643a0",
	                    "EndpointID": "5bb6497b23d8bb0a18294984e39ea0c866057529d0c9e638e35a015f1ae07880",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
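The inspect dump above ends with the container's published port map, which the test harness later reads with the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`. A minimal sketch of the same lookup in Python, using an illustrative sample shaped like that `NetworkSettings.Ports` section (the `host_port` helper is hypothetical, not part of the test suite):

```python
import json

# Illustrative sample shaped like the "NetworkSettings.Ports" section of
# the `docker inspect` output above.
inspect_output = json.loads("""
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32974"}],
        "2376/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32973"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32972"}]
      }
    }
  }
]
""")

def host_port(inspect_data, container_port):
    """Return the first published host port for a container port, or None."""
    bindings = inspect_data[0]["NetworkSettings"]["Ports"].get(container_port)
    return bindings[0]["HostPort"] if bindings else None

print(host_port(inspect_output, "22/tcp"))  # the SSH port minikube dials
```

This mirrors why the harness needs the inspect call at all: the host ports are allocated dynamically (`"HostPort": ""` in the `PortBindings` request), so they can only be discovered after the container is running.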
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-855695 -n running-upgrade-855695
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-855695 -n running-upgrade-855695: exit status 4 (336.572365ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0610 14:34:25.253558  207103 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-855695" does not appear in /home/jenkins/minikube-integration/15074-18675/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-855695" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-855695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-855695
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-855695: (1.920778455s)
--- FAIL: TestRunningBinaryUpgrade (64.77s)

TestStoppedBinaryUpgrade/Upgrade (92.3s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.3377897757.exe start -p stopped-upgrade-150295 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.3377897757.exe start -p stopped-upgrade-150295 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m25.559593365s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.9.0.3377897757.exe -p stopped-upgrade-150295 stop
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-150295 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-150295 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.77679201s)

-- stdout --
	* [stopped-upgrade-150295] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15074
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-150295 in cluster stopped-upgrade-150295
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-150295" ...
	
	

-- /stdout --
** stderr ** 
	I0610 14:33:11.207040  193638 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:33:11.207161  193638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:33:11.207170  193638 out.go:309] Setting ErrFile to fd 2...
	I0610 14:33:11.207177  193638 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:33:11.207299  193638 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:33:11.207784  193638 out.go:303] Setting JSON to false
	I0610 14:33:11.209083  193638 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8146,"bootTime":1686399445,"procs":551,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:33:11.209146  193638 start.go:137] virtualization: kvm guest
	I0610 14:33:11.211702  193638 out.go:177] * [stopped-upgrade-150295] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 14:33:11.213423  193638 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 14:33:11.213480  193638 notify.go:220] Checking for updates...
	I0610 14:33:11.215061  193638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:33:11.216712  193638 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:33:11.218195  193638 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:33:11.219606  193638 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 14:33:11.221230  193638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 14:33:11.223008  193638 config.go:182] Loaded profile config "stopped-upgrade-150295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0610 14:33:11.223026  193638 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
	I0610 14:33:11.224873  193638 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0610 14:33:11.226375  193638 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 14:33:11.247520  193638 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:33:11.247613  193638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:33:11.303967  193638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:65 SystemTime:2023-06-10 14:33:11.293631087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:33:11.304087  193638 docker.go:294] overlay module found
	I0610 14:33:11.307194  193638 out.go:177] * Using the docker driver based on existing profile
	I0610 14:33:11.308675  193638 start.go:297] selected driver: docker
	I0610 14:33:11.308688  193638 start.go:875] validating driver "docker" against &{Name:stopped-upgrade-150295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-150295 Namespace: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:33:11.308784  193638 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 14:33:11.309724  193638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:33:11.364714  193638 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:65 SystemTime:2023-06-10 14:33:11.356389643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:33:11.382491  193638 cni.go:84] Creating CNI manager for ""
	I0610 14:33:11.382520  193638 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0610 14:33:11.382533  193638 start_flags.go:319] config:
	{Name:stopped-upgrade-150295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-150295 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:33:11.385046  193638 out.go:177] * Starting control plane node stopped-upgrade-150295 in cluster stopped-upgrade-150295
	I0610 14:33:11.386523  193638 cache.go:122] Beginning downloading kic base image for docker with crio
	I0610 14:33:11.387965  193638 out.go:177] * Pulling base image ...
	I0610 14:33:11.389400  193638 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0610 14:33:11.389581  193638 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 14:33:11.406596  193638 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
	I0610 14:33:11.406622  193638 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
	W0610 14:33:11.422905  193638 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0610 14:33:11.423070  193638 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/stopped-upgrade-150295/config.json ...
	I0610 14:33:11.423172  193638 cache.go:107] acquiring lock: {Name:mk2ab4dc2519af17da12f89893086e52956fc66b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:33:11.423223  193638 cache.go:107] acquiring lock: {Name:mkb090a71cd20228f61c81ffca27c30dedfcb29c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:33:11.423244  193638 cache.go:107] acquiring lock: {Name:mkf003830ff94131c3e96735a8d10c8bee0ad118 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:33:11.423293  193638 cache.go:115] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 14:33:11.423286  193638 cache.go:107] acquiring lock: {Name:mka2cae3df1e597c161975a949393602b7412ad2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:33:11.423306  193638 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.139µs
	I0610 14:33:11.423317  193638 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 14:33:11.423325  193638 cache.go:195] Successfully downloaded all kic artifacts
	I0610 14:33:11.423342  193638 cache.go:107] acquiring lock: {Name:mk4c5c68073c8df747fa53eda49ec3c4eefad188 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:33:11.423347  193638 cache.go:107] acquiring lock: {Name:mk39703faf212fc2c49cb260fa75cabfee9b09bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:33:11.423380  193638 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0610 14:33:11.423400  193638 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0610 14:33:11.423422  193638 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0610 14:33:11.423438  193638 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0610 14:33:11.423453  193638 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0610 14:33:11.423349  193638 start.go:364] acquiring machines lock for stopped-upgrade-150295: {Name:mkd3e9b620e2403e4a3da39ac0dbe16587fbf473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:33:11.423560  193638 cache.go:107] acquiring lock: {Name:mk37efa358136e7b150f4b40416fb45dd401267c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:33:11.423588  193638 cache.go:107] acquiring lock: {Name:mk08d1c890595b8ad704c2441dc419a18c8c88e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 14:33:11.423683  193638 start.go:368] acquired machines lock for "stopped-upgrade-150295" in 140.94µs
	I0610 14:33:11.423695  193638 start.go:96] Skipping create...Using existing machine configuration
	I0610 14:33:11.423701  193638 fix.go:55] fixHost starting: m01
	I0610 14:33:11.423762  193638 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0610 14:33:11.423845  193638 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0610 14:33:11.423976  193638 cli_runner.go:164] Run: docker container inspect stopped-upgrade-150295 --format={{.State.Status}}
	I0610 14:33:11.424589  193638 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0610 14:33:11.424598  193638 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0610 14:33:11.424641  193638 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0610 14:33:11.424655  193638 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0610 14:33:11.424723  193638 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0610 14:33:11.424836  193638 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0610 14:33:11.424857  193638 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0610 14:33:11.445633  193638 fix.go:103] recreateIfNeeded on stopped-upgrade-150295: state=Stopped err=<nil>
	W0610 14:33:11.445655  193638 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 14:33:11.448234  193638 out.go:177] * Restarting existing docker container for "stopped-upgrade-150295" ...
	I0610 14:33:11.449828  193638 cli_runner.go:164] Run: docker start stopped-upgrade-150295
	I0610 14:33:11.590075  193638 cache.go:162] opening:  /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0610 14:33:11.602961  193638 cache.go:162] opening:  /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0610 14:33:11.626501  193638 cache.go:162] opening:  /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0610 14:33:11.628333  193638 cache.go:162] opening:  /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0610 14:33:11.640497  193638 cache.go:162] opening:  /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0610 14:33:11.658853  193638 cache.go:162] opening:  /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0610 14:33:11.664404  193638 cache.go:162] opening:  /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0610 14:33:11.720933  193638 cli_runner.go:164] Run: docker container inspect stopped-upgrade-150295 --format={{.State.Status}}
	I0610 14:33:11.734308  193638 cache.go:157] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0610 14:33:11.734331  193638 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 310.987155ms
	I0610 14:33:11.734345  193638 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0610 14:33:11.744620  193638 kic.go:426] container "stopped-upgrade-150295" state is running.
	I0610 14:33:11.745123  193638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-150295
	I0610 14:33:11.764216  193638 profile.go:148] Saving config to /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/stopped-upgrade-150295/config.json ...
	I0610 14:33:11.766722  193638 machine.go:88] provisioning docker machine ...
	I0610 14:33:11.766748  193638 ubuntu.go:169] provisioning hostname "stopped-upgrade-150295"
	I0610 14:33:11.766819  193638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-150295
	I0610 14:33:11.790348  193638 main.go:141] libmachine: Using SSH client type: native
	I0610 14:33:11.791049  193638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32971 <nil> <nil>}
	I0610 14:33:11.791076  193638 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-150295 && echo "stopped-upgrade-150295" | sudo tee /etc/hostname
	I0610 14:33:11.791991  193638 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34620->127.0.0.1:32971: read: connection reset by peer
	I0610 14:33:12.141377  193638 cache.go:157] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0610 14:33:12.141401  193638 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 717.814163ms
	I0610 14:33:12.141418  193638 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0610 14:33:12.606159  193638 cache.go:157] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0610 14:33:12.606184  193638 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.182847629s
	I0610 14:33:12.606196  193638 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0610 14:33:12.800840  193638 cache.go:157] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0610 14:33:12.800864  193638 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.377585066s
	I0610 14:33:12.800877  193638 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0610 14:33:12.854761  193638 cache.go:157] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0610 14:33:12.854783  193638 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.431547555s
	I0610 14:33:12.854794  193638 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0610 14:33:12.977189  193638 cache.go:157] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0610 14:33:12.977211  193638 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.553999415s
	I0610 14:33:12.977222  193638 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0610 14:33:13.485979  193638 cache.go:157] /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0610 14:33:13.486006  193638 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.062449622s
	I0610 14:33:13.486016  193638 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0610 14:33:13.486032  193638 cache.go:87] Successfully saved all images to host disk.
	I0610 14:33:14.905504  193638 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-150295
	
	I0610 14:33:14.905582  193638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-150295
	I0610 14:33:14.922753  193638 main.go:141] libmachine: Using SSH client type: native
	I0610 14:33:14.923314  193638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32971 <nil> <nil>}
	I0610 14:33:14.923343  193638 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-150295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-150295/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-150295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 14:33:15.033847  193638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 14:33:15.033880  193638 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15074-18675/.minikube CaCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15074-18675/.minikube}
	I0610 14:33:15.033925  193638 ubuntu.go:177] setting up certificates
	I0610 14:33:15.033933  193638 provision.go:83] configureAuth start
	I0610 14:33:15.033980  193638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-150295
	I0610 14:33:15.049218  193638 provision.go:138] copyHostCerts
	I0610 14:33:15.049275  193638 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem, removing ...
	I0610 14:33:15.049288  193638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem
	I0610 14:33:15.049329  193638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/key.pem (1675 bytes)
	I0610 14:33:15.049406  193638 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem, removing ...
	I0610 14:33:15.049413  193638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem
	I0610 14:33:15.049430  193638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/ca.pem (1078 bytes)
	I0610 14:33:15.049478  193638 exec_runner.go:144] found /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem, removing ...
	I0610 14:33:15.049485  193638 exec_runner.go:203] rm: /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem
	I0610 14:33:15.049499  193638 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15074-18675/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15074-18675/.minikube/cert.pem (1123 bytes)
	I0610 14:33:15.049543  193638 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-150295 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-150295]
	I0610 14:33:15.274435  193638 provision.go:172] copyRemoteCerts
	I0610 14:33:15.274479  193638 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 14:33:15.274509  193638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-150295
	I0610 14:33:15.289422  193638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/stopped-upgrade-150295/id_rsa Username:docker}
	I0610 14:33:15.369612  193638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 14:33:15.385578  193638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0610 14:33:15.400972  193638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 14:33:15.416468  193638 provision.go:86] duration metric: configureAuth took 382.522639ms
	I0610 14:33:15.416491  193638 ubuntu.go:193] setting minikube options for container-runtime
	I0610 14:33:15.416663  193638 config.go:182] Loaded profile config "stopped-upgrade-150295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0610 14:33:15.416744  193638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-150295
	I0610 14:33:15.432400  193638 main.go:141] libmachine: Using SSH client type: native
	I0610 14:33:15.432797  193638 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 127.0.0.1 32971 <nil> <nil>}
	I0610 14:33:15.432815  193638 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 14:33:16.152236  193638 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 14:33:16.152262  193638 machine.go:91] provisioned docker machine in 4.385522395s
	I0610 14:33:16.152275  193638 start.go:300] post-start starting for "stopped-upgrade-150295" (driver="docker")
	I0610 14:33:16.152284  193638 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 14:33:16.152365  193638 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 14:33:16.152407  193638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-150295
	I0610 14:33:16.168410  193638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/stopped-upgrade-150295/id_rsa Username:docker}
	I0610 14:33:16.253163  193638 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 14:33:16.256170  193638 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0610 14:33:16.256201  193638 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0610 14:33:16.256216  193638 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0610 14:33:16.256229  193638 info.go:137] Remote host: Ubuntu 19.10
	I0610 14:33:16.256243  193638 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/addons for local assets ...
	I0610 14:33:16.256304  193638 filesync.go:126] Scanning /home/jenkins/minikube-integration/15074-18675/.minikube/files for local assets ...
	I0610 14:33:16.256399  193638 filesync.go:149] local asset: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem -> 254852.pem in /etc/ssl/certs
	I0610 14:33:16.256512  193638 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 14:33:16.262914  193638 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/ssl/certs/254852.pem --> /etc/ssl/certs/254852.pem (1708 bytes)
	I0610 14:33:16.279775  193638 start.go:303] post-start completed in 127.453693ms
	I0610 14:33:16.279841  193638 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 14:33:16.279886  193638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-150295
	I0610 14:33:16.297622  193638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/stopped-upgrade-150295/id_rsa Username:docker}
	I0610 14:33:16.378233  193638 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0610 14:33:16.381724  193638 fix.go:57] fixHost completed within 4.958017609s
	I0610 14:33:16.381756  193638 start.go:83] releasing machines lock for "stopped-upgrade-150295", held for 4.95805467s
	I0610 14:33:16.381820  193638 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-150295
	I0610 14:33:16.397244  193638 ssh_runner.go:195] Run: cat /version.json
	I0610 14:33:16.397295  193638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-150295
	I0610 14:33:16.397364  193638 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 14:33:16.397419  193638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-150295
	I0610 14:33:16.415517  193638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/stopped-upgrade-150295/id_rsa Username:docker}
	I0610 14:33:16.418970  193638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/stopped-upgrade-150295/id_rsa Username:docker}
	W0610 14:33:16.520238  193638 start.go:414] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0610 14:33:16.520330  193638 ssh_runner.go:195] Run: systemctl --version
	I0610 14:33:16.524075  193638 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 14:33:16.575143  193638 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 14:33:16.579291  193638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:33:16.593571  193638 cni.go:220] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0610 14:33:16.593637  193638 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 14:33:16.615296  193638 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 14:33:16.615317  193638 start.go:481] detecting cgroup driver to use...
	I0610 14:33:16.615345  193638 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0610 14:33:16.615386  193638 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 14:33:16.635094  193638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 14:33:16.644332  193638 docker.go:193] disabling cri-docker service (if available) ...
	I0610 14:33:16.644374  193638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 14:33:16.652657  193638 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 14:33:16.660869  193638 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0610 14:33:16.671184  193638 docker.go:203] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0610 14:33:16.671240  193638 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 14:33:16.729162  193638 docker.go:209] disabling docker service ...
	I0610 14:33:16.729243  193638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 14:33:16.738412  193638 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 14:33:16.746748  193638 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 14:33:16.815666  193638 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 14:33:16.895925  193638 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 14:33:16.907893  193638 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 14:33:16.923474  193638 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0610 14:33:16.923533  193638 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 14:33:16.936072  193638 out.go:177] 
	W0610 14:33:16.937792  193638 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0610 14:33:16.937818  193638 out.go:239] * 
	W0610 14:33:16.938842  193638 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 14:33:16.940733  193638 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-150295 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (92.30s)
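The failing step above is the `sed` edit of `/etc/crio/crio.conf.d/02-crio.conf`: the v1.9.0 guest image (Ubuntu 19.10) predates cri-o's drop-in config directory, so the file does not exist and `sed` exits with status 2. A minimal guard for this class of failure might look like the sketch below. This is an illustration only, not minikube's actual code; the function name and the legacy-path fallback are assumptions. It probes for the drop-in file first, falls back to the legacy `/etc/crio/crio.conf`, and otherwise creates the drop-in (on the real host each command would run under `sudo`):

```shell
# Update cri-o's pause_image, tolerating guest images that predate the
# crio.conf.d drop-in scheme. $1 is a root prefix (empty on a real host,
# a scratch directory under test), $2 is the pause image reference.
set_crio_pause_image() {
  root="$1"
  pause="$2"
  conf_d="$root/etc/crio/crio.conf.d/02-crio.conf"
  legacy="$root/etc/crio/crio.conf"
  if [ -f "$conf_d" ]; then
    # Modern layout: edit the drop-in file in place.
    sed -i "s|^.*pause_image = .*\$|pause_image = \"$pause\"|" "$conf_d"
  elif [ -f "$legacy" ]; then
    # Old guest image (e.g. from minikube v1.9.0): edit the monolithic config.
    sed -i "s|^.*pause_image = .*\$|pause_image = \"$pause\"|" "$legacy"
  else
    # Neither exists: create the drop-in with just the pause_image setting.
    mkdir -p "${conf_d%/*}"
    printf '[crio.image]\npause_image = "%s"\n' "$pause" > "$conf_d"
  fi
}
```

After either branch, `crio` would still need a restart (as the provisioner already does for `/etc/sysconfig/crio.minikube`) for the new pause image to take effect.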

                                                
                                    

Test pass (273/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.02
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.27.2/json-events 5.68
11 TestDownloadOnly/v1.27.2/preload-exists 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.18
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
18 TestDownloadOnlyKic 1.14
19 TestBinaryMirror 0.67
20 TestOffline 79.21
22 TestAddons/Setup 121.99
24 TestAddons/parallel/Registry 13.07
26 TestAddons/parallel/InspektorGadget 10.57
27 TestAddons/parallel/MetricsServer 5.46
28 TestAddons/parallel/HelmTiller 8.48
30 TestAddons/parallel/CSI 55.03
31 TestAddons/parallel/Headlamp 11.04
32 TestAddons/parallel/CloudSpanner 5.31
35 TestAddons/serial/GCPAuth/Namespaces 0.11
36 TestAddons/StoppedEnableDisable 12.03
37 TestCertOptions 34.85
38 TestCertExpiration 231.39
40 TestForceSystemdFlag 26.39
41 TestForceSystemdEnv 37.13
42 TestKVMDriverInstallOrUpdate 2.33
46 TestErrorSpam/setup 20.87
47 TestErrorSpam/start 0.55
48 TestErrorSpam/status 0.78
49 TestErrorSpam/pause 1.38
50 TestErrorSpam/unpause 1.38
51 TestErrorSpam/stop 1.33
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 70.11
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 28.23
58 TestFunctional/serial/KubeContext 0.04
59 TestFunctional/serial/KubectlGetPods 0.08
62 TestFunctional/serial/CacheCmd/cache/add_remote 2.69
63 TestFunctional/serial/CacheCmd/cache/add_local 0.68
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
65 TestFunctional/serial/CacheCmd/cache/list 0.04
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
67 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
68 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/serial/MinikubeKubectlCmd 0.1
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
71 TestFunctional/serial/ExtraConfig 32.88
72 TestFunctional/serial/ComponentHealth 0.07
73 TestFunctional/serial/LogsCmd 1.36
74 TestFunctional/serial/LogsFileCmd 1.29
76 TestFunctional/parallel/ConfigCmd 0.32
77 TestFunctional/parallel/DashboardCmd 8.19
78 TestFunctional/parallel/DryRun 0.43
79 TestFunctional/parallel/InternationalLanguage 0.18
80 TestFunctional/parallel/StatusCmd 0.95
84 TestFunctional/parallel/ServiceCmdConnect 7.66
85 TestFunctional/parallel/AddonsCmd 0.12
86 TestFunctional/parallel/PersistentVolumeClaim 29.41
88 TestFunctional/parallel/SSHCmd 0.56
89 TestFunctional/parallel/CpCmd 1.14
90 TestFunctional/parallel/MySQL 23.56
91 TestFunctional/parallel/FileSync 0.36
92 TestFunctional/parallel/CertSync 1.95
96 TestFunctional/parallel/NodeLabels 0.11
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
100 TestFunctional/parallel/License 0.14
101 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
102 TestFunctional/parallel/Version/short 0.04
103 TestFunctional/parallel/Version/components 0.63
104 TestFunctional/parallel/ImageCommands/ImageListShort 1.32
105 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
106 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
107 TestFunctional/parallel/ImageCommands/ImageListYaml 1.14
108 TestFunctional/parallel/ImageCommands/ImageBuild 2.49
109 TestFunctional/parallel/ImageCommands/Setup 1.12
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
115 TestFunctional/parallel/ProfileCmd/profile_list 0.33
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
117 TestFunctional/parallel/ServiceCmd/List 0.31
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
120 TestFunctional/parallel/ServiceCmd/Format 0.32
121 TestFunctional/parallel/ServiceCmd/URL 0.34
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.37
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.31
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.54
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.83
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.14
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.2
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/MountCmd/any-port 7.22
140 TestFunctional/parallel/MountCmd/specific-port 1.53
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.49
142 TestFunctional/delete_addon-resizer_images 0.07
143 TestFunctional/delete_my-image_image 0.01
144 TestFunctional/delete_minikube_cached_images 0.01
148 TestIngressAddonLegacy/StartLegacyK8sCluster 58.55
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.49
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.33
155 TestJSONOutput/start/Command 66.73
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.6
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.55
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 5.74
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.18
180 TestKicCustomNetwork/create_custom_network 33.55
181 TestKicCustomNetwork/use_default_bridge_network 23.3
182 TestKicExistingNetwork 26.55
183 TestKicCustomSubnet 26.59
184 TestKicStaticIP 24.66
185 TestMainNoArgs 0.04
186 TestMinikubeProfile 47.71
189 TestMountStart/serial/StartWithMountFirst 7.76
190 TestMountStart/serial/VerifyMountFirst 0.22
191 TestMountStart/serial/StartWithMountSecond 7.63
192 TestMountStart/serial/VerifyMountSecond 0.21
193 TestMountStart/serial/DeleteFirst 1.57
194 TestMountStart/serial/VerifyMountPostDelete 0.22
195 TestMountStart/serial/Stop 1.19
196 TestMountStart/serial/RestartStopped 6.59
197 TestMountStart/serial/VerifyMountPostStop 0.22
200 TestMultiNode/serial/FreshStart2Nodes 127.49
201 TestMultiNode/serial/DeployApp2Nodes 3.75
203 TestMultiNode/serial/AddNode 16.25
204 TestMultiNode/serial/ProfileList 0.25
205 TestMultiNode/serial/CopyFile 8.13
206 TestMultiNode/serial/StopNode 2.01
207 TestMultiNode/serial/StartAfterStop 10.81
208 TestMultiNode/serial/RestartKeepsNodes 110.45
209 TestMultiNode/serial/DeleteNode 4.52
210 TestMultiNode/serial/StopMultiNode 23.76
211 TestMultiNode/serial/RestartMultiNode 76.06
212 TestMultiNode/serial/ValidateNameConflict 23.01
217 TestPreload 150.99
219 TestScheduledStopUnix 98.58
222 TestInsufficientStorage 12.3
225 TestKubernetesUpgrade 355.89
226 TestMissingContainerUpgrade 147.55
228 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
232 TestNoKubernetes/serial/StartWithK8s 35.76
237 TestNetworkPlugins/group/false 8.25
241 TestNoKubernetes/serial/StartWithStopK8s 8.03
242 TestNoKubernetes/serial/Start 6.89
243 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
244 TestNoKubernetes/serial/ProfileList 1.28
245 TestNoKubernetes/serial/Stop 1.2
246 TestNoKubernetes/serial/StartNoArgs 6.55
247 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
248 TestStoppedBinaryUpgrade/Setup 0.6
250 TestStoppedBinaryUpgrade/MinikubeLogs 0.61
259 TestPause/serial/Start 71.13
260 TestNetworkPlugins/group/auto/Start 72.36
261 TestNetworkPlugins/group/kindnet/Start 67.76
262 TestPause/serial/SecondStartNoReconfiguration 43.08
263 TestNetworkPlugins/group/auto/KubeletFlags 0.24
264 TestNetworkPlugins/group/auto/NetCatPod 10.38
265 TestNetworkPlugins/group/auto/DNS 0.18
266 TestNetworkPlugins/group/auto/Localhost 0.13
267 TestNetworkPlugins/group/auto/HairPin 0.13
268 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
269 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
270 TestNetworkPlugins/group/kindnet/NetCatPod 10.35
271 TestNetworkPlugins/group/kindnet/DNS 0.17
272 TestNetworkPlugins/group/kindnet/Localhost 0.14
273 TestNetworkPlugins/group/kindnet/HairPin 0.14
274 TestNetworkPlugins/group/calico/Start 62.11
275 TestPause/serial/Pause 0.81
276 TestPause/serial/VerifyStatus 0.31
277 TestPause/serial/Unpause 0.64
278 TestPause/serial/PauseAgain 0.78
279 TestPause/serial/DeletePaused 2.75
280 TestPause/serial/VerifyDeletedResources 18.66
281 TestNetworkPlugins/group/custom-flannel/Start 54.57
282 TestNetworkPlugins/group/enable-default-cni/Start 41.95
283 TestNetworkPlugins/group/calico/ControllerPod 5.02
284 TestNetworkPlugins/group/calico/KubeletFlags 0.24
285 TestNetworkPlugins/group/calico/NetCatPod 9.35
286 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
287 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.42
288 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
289 TestNetworkPlugins/group/calico/DNS 0.2
290 TestNetworkPlugins/group/calico/Localhost 0.16
291 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.37
292 TestNetworkPlugins/group/calico/HairPin 0.2
293 TestNetworkPlugins/group/custom-flannel/DNS 0.22
294 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
295 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
296 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
297 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
298 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
299 TestNetworkPlugins/group/flannel/Start 60.1
300 TestNetworkPlugins/group/bridge/Start 40.23
302 TestStartStop/group/old-k8s-version/serial/FirstStart 123.49
304 TestStartStop/group/no-preload/serial/FirstStart 60.4
305 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
306 TestNetworkPlugins/group/bridge/NetCatPod 9.33
307 TestNetworkPlugins/group/bridge/DNS 0.2
308 TestNetworkPlugins/group/bridge/Localhost 0.18
309 TestNetworkPlugins/group/bridge/HairPin 0.21
310 TestNetworkPlugins/group/flannel/ControllerPod 5.02
311 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
312 TestNetworkPlugins/group/flannel/NetCatPod 10.33
313 TestNetworkPlugins/group/flannel/DNS 0.17
314 TestNetworkPlugins/group/flannel/Localhost 0.19
315 TestNetworkPlugins/group/flannel/HairPin 0.13
317 TestStartStop/group/embed-certs/serial/FirstStart 67.89
318 TestStartStop/group/no-preload/serial/DeployApp 8.59
319 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
320 TestStartStop/group/no-preload/serial/Stop 11.97
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.49
323 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
324 TestStartStop/group/no-preload/serial/SecondStart 340.25
325 TestStartStop/group/old-k8s-version/serial/DeployApp 7.37
326 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.59
327 TestStartStop/group/old-k8s-version/serial/Stop 11.97
328 TestStartStop/group/embed-certs/serial/DeployApp 7.39
329 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
330 TestStartStop/group/old-k8s-version/serial/SecondStart 451.45
331 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.72
332 TestStartStop/group/embed-certs/serial/Stop 14.51
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.39
334 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
335 TestStartStop/group/embed-certs/serial/SecondStart 340.69
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.72
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.06
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.14
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 340.18
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.02
341 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
342 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
343 TestStartStop/group/no-preload/serial/Pause 2.48
345 TestStartStop/group/newest-cni/serial/FirstStart 34.12
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.77
348 TestStartStop/group/newest-cni/serial/Stop 1.24
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
350 TestStartStop/group/newest-cni/serial/SecondStart 27.27
351 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.03
352 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
353 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.04
355 TestStartStop/group/embed-certs/serial/Pause 3.22
356 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
359 TestStartStop/group/newest-cni/serial/Pause 2.94
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
362 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.4
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
365 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
366 TestStartStop/group/old-k8s-version/serial/Pause 2.39
TestDownloadOnly/v1.16.0/json-events (7.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-735343 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-735343 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.023438824s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.02s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-735343
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-735343: exit status 85 (52.996119ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-735343 | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC |          |
	|         | -p download-only-735343        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 14:01:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 14:01:19.182457   25497 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:01:19.182587   25497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:01:19.182598   25497 out.go:309] Setting ErrFile to fd 2...
	I0610 14:01:19.182602   25497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:01:19.182717   25497 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	W0610 14:01:19.182831   25497 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15074-18675/.minikube/config/config.json: open /home/jenkins/minikube-integration/15074-18675/.minikube/config/config.json: no such file or directory
	I0610 14:01:19.183415   25497 out.go:303] Setting JSON to true
	I0610 14:01:19.184310   25497 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6234,"bootTime":1686399445,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:01:19.184364   25497 start.go:137] virtualization: kvm guest
	I0610 14:01:19.187435   25497 out.go:97] [download-only-735343] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 14:01:19.189250   25497 out.go:169] MINIKUBE_LOCATION=15074
	W0610 14:01:19.187554   25497 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 14:01:19.187614   25497 notify.go:220] Checking for updates...
	I0610 14:01:19.192684   25497 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:01:19.194577   25497 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:01:19.196259   25497 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:01:19.197897   25497 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0610 14:01:19.201003   25497 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 14:01:19.201219   25497 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 14:01:19.221707   25497 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:01:19.221812   25497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:01:19.553130   25497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-06-10 14:01:19.545437351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:01:19.553250   25497 docker.go:294] overlay module found
	I0610 14:01:19.555265   25497 out.go:97] Using the docker driver based on user configuration
	I0610 14:01:19.555284   25497 start.go:297] selected driver: docker
	I0610 14:01:19.555294   25497 start.go:875] validating driver "docker" against <nil>
	I0610 14:01:19.555377   25497 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:01:19.604149   25497 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-06-10 14:01:19.596067586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:01:19.604328   25497 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 14:01:19.604866   25497 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32101MB, container=32101MB
	I0610 14:01:19.605049   25497 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 14:01:19.607310   25497 out.go:169] Using Docker driver with root privileges
	I0610 14:01:19.608873   25497 cni.go:84] Creating CNI manager for ""
	I0610 14:01:19.608898   25497 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0610 14:01:19.608912   25497 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 14:01:19.608928   25497 start_flags.go:319] config:
	{Name:download-only-735343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-735343 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:01:19.610782   25497 out.go:97] Starting control plane node download-only-735343 in cluster download-only-735343
	I0610 14:01:19.610813   25497 cache.go:122] Beginning downloading kic base image for docker with crio
	I0610 14:01:19.613716   25497 out.go:97] Pulling base image ...
	I0610 14:01:19.613750   25497 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0610 14:01:19.613787   25497 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 14:01:19.627352   25497 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0610 14:01:19.627500   25497 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory
	I0610 14:01:19.627576   25497 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0610 14:01:19.639831   25497 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0610 14:01:19.639858   25497 cache.go:57] Caching tarball of preloaded images
	I0610 14:01:19.639966   25497 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0610 14:01:19.642060   25497 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0610 14:01:19.642071   25497 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0610 14:01:19.663912   25497 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0610 14:01:23.743811   25497 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-735343"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                    
TestDownloadOnly/v1.27.2/json-events (5.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-735343 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-735343 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.677042944s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (5.68s)

                                                
                                    
TestDownloadOnly/v1.27.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-735343
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-735343: exit status 85 (55.91474ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-735343 | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC |          |
	|         | -p download-only-735343        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-735343 | jenkins | v1.30.1 | 10 Jun 23 14:01 UTC |          |
	|         | -p download-only-735343        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 14:01:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 14:01:26.259857   25655 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:01:26.259979   25655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:01:26.259987   25655 out.go:309] Setting ErrFile to fd 2...
	I0610 14:01:26.259992   25655 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:01:26.260109   25655 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	W0610 14:01:26.260234   25655 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15074-18675/.minikube/config/config.json: open /home/jenkins/minikube-integration/15074-18675/.minikube/config/config.json: no such file or directory
	I0610 14:01:26.260581   25655 out.go:303] Setting JSON to true
	I0610 14:01:26.261375   25655 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6241,"bootTime":1686399445,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:01:26.261426   25655 start.go:137] virtualization: kvm guest
	I0610 14:01:26.264029   25655 out.go:97] [download-only-735343] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 14:01:26.265645   25655 out.go:169] MINIKUBE_LOCATION=15074
	I0610 14:01:26.264158   25655 notify.go:220] Checking for updates...
	I0610 14:01:26.268798   25655 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:01:26.270378   25655 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:01:26.272797   25655 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:01:26.274397   25655 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0610 14:01:26.277318   25655 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 14:01:26.277653   25655 config.go:182] Loaded profile config "download-only-735343": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0610 14:01:26.277697   25655 start.go:783] api.Load failed for download-only-735343: filestore "download-only-735343": Docker machine "download-only-735343" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 14:01:26.277772   25655 driver.go:375] Setting default libvirt URI to qemu:///system
	W0610 14:01:26.277797   25655 start.go:783] api.Load failed for download-only-735343: filestore "download-only-735343": Docker machine "download-only-735343" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 14:01:26.297527   25655 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:01:26.297584   25655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:01:26.342371   25655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-06-10 14:01:26.334901538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:01:26.342457   25655 docker.go:294] overlay module found
	I0610 14:01:26.344422   25655 out.go:97] Using the docker driver based on existing profile
	I0610 14:01:26.344452   25655 start.go:297] selected driver: docker
	I0610 14:01:26.344457   25655 start.go:875] validating driver "docker" against &{Name:download-only-735343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-735343 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP:}
	I0610 14:01:26.344592   25655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:01:26.386483   25655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-06-10 14:01:26.379245584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:01:26.387021   25655 cni.go:84] Creating CNI manager for ""
	I0610 14:01:26.387037   25655 cni.go:142] "docker" driver + "crio" runtime found, recommending kindnet
	I0610 14:01:26.387046   25655 start_flags.go:319] config:
	{Name:download-only-735343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-735343 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:01:26.389447   25655 out.go:97] Starting control plane node download-only-735343 in cluster download-only-735343
	I0610 14:01:26.389467   25655 cache.go:122] Beginning downloading kic base image for docker with crio
	I0610 14:01:26.391115   25655 out.go:97] Pulling base image ...
	I0610 14:01:26.391135   25655 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:01:26.391238   25655 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
	I0610 14:01:26.404865   25655 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b to local cache
	I0610 14:01:26.404973   25655 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory
	I0610 14:01:26.404997   25655 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local cache directory, skipping pull
	I0610 14:01:26.405007   25655 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in cache, skipping pull
	I0610 14:01:26.405020   25655 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b as a tarball
	I0610 14:01:26.411299   25655 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4
	I0610 14:01:26.411325   25655 cache.go:57] Caching tarball of preloaded images
	I0610 14:01:26.411446   25655 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime crio
	I0610 14:01:26.413433   25655 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0610 14:01:26.413444   25655 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 ...
	I0610 14:01:26.454432   25655 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9aab8d7df6abf9830e86bd030b106830 -> /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4
	I0610 14:01:30.442966   25655 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 ...
	I0610 14:01:30.443047   25655 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15074-18675/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-735343"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.18s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-735343
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnlyKic (1.14s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-347332 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-347332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-347332
--- PASS: TestDownloadOnlyKic (1.14s)

TestBinaryMirror (0.67s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-545398 --alsologtostderr --binary-mirror http://127.0.0.1:38625 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-545398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-545398
--- PASS: TestBinaryMirror (0.67s)

TestOffline (79.21s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-126429 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-126429 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m16.21521812s)
helpers_test.go:175: Cleaning up "offline-crio-126429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-126429
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-126429: (2.991966826s)
--- PASS: TestOffline (79.21s)

TestAddons/Setup (121.99s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-060929 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-060929 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m1.987926028s)
--- PASS: TestAddons/Setup (121.99s)

TestAddons/parallel/Registry (13.07s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 11.962323ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-d9t4w" [e1c46cbb-c6f4-4d60-b97f-8ca3827c1901] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008605137s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nnzpl" [02e1f946-92ca-4b6a-b967-b7f201fbbfd3] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007072178s
addons_test.go:316: (dbg) Run:  kubectl --context addons-060929 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-060929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-060929 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.548897534s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 ip
2023/06/10 14:03:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.07s)

TestAddons/parallel/InspektorGadget (10.57s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ks8g9" [4a3c557e-89f5-4fc2-be58-d806bc5a1efc] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006749988s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-060929
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-060929: (5.56352127s)
--- PASS: TestAddons/parallel/InspektorGadget (10.57s)

TestAddons/parallel/MetricsServer (5.46s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 12.023775ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-srvl6" [ac754de1-7b0e-4135-8476-229f6fab6e23] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008440386s
addons_test.go:391: (dbg) Run:  kubectl --context addons-060929 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.46s)

TestAddons/parallel/HelmTiller (8.48s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 4.276661ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-c5kmk" [53bf6ef3-aa47-441c-84eb-b4b76668ae56] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007257504s
addons_test.go:449: (dbg) Run:  kubectl --context addons-060929 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-060929 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.178815391s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.48s)

TestAddons/parallel/CSI (55.03s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 13.009345ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-060929 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-060929 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [520d1ade-9d69-400b-81e2-7e9dc44f7d13] Pending
helpers_test.go:344: "task-pv-pod" [520d1ade-9d69-400b-81e2-7e9dc44f7d13] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [520d1ade-9d69-400b-81e2-7e9dc44f7d13] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.006000769s
addons_test.go:560: (dbg) Run:  kubectl --context addons-060929 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-060929 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-060929 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-060929 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-060929 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-060929 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-060929 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-060929 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f4317dc7-1476-4d1c-b54f-57c12a28263d] Pending
helpers_test.go:344: "task-pv-pod-restore" [f4317dc7-1476-4d1c-b54f-57c12a28263d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f4317dc7-1476-4d1c-b54f-57c12a28263d] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.007799303s
addons_test.go:602: (dbg) Run:  kubectl --context addons-060929 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-060929 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-060929 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-060929 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.296016057s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-060929 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.03s)

TestAddons/parallel/Headlamp (11.04s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-060929 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-060929 --alsologtostderr -v=1: (1.029315038s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-c42vq" [e0c37b63-fd36-4531-ab50-d130bc22efac] Pending
helpers_test.go:344: "headlamp-6b5756787-c42vq" [e0c37b63-fd36-4531-ab50-d130bc22efac] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-c42vq" [e0c37b63-fd36-4531-ab50-d130bc22efac] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.012804021s
--- PASS: TestAddons/parallel/Headlamp (11.04s)

TestAddons/parallel/CloudSpanner (5.31s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-fb67554b8-kqwzw" [629291ce-715d-42a0-8c11-d531cb2e16e2] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.036064557s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-060929
--- PASS: TestAddons/parallel/CloudSpanner (5.31s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-060929 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-060929 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.03s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-060929
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-060929: (11.857575867s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-060929
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-060929
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-060929
--- PASS: TestAddons/StoppedEnableDisable (12.03s)

TestCertOptions (34.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-656483 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-656483 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.018914789s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-656483 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-656483 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-656483 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-656483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-656483
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-656483: (3.024621121s)
--- PASS: TestCertOptions (34.85s)

TestCertExpiration (231.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-426764 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0610 14:31:08.926721   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-426764 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.669503617s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-426764 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-426764 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.708965037s)
helpers_test.go:175: Cleaning up "cert-expiration-426764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-426764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-426764: (2.007517824s)
--- PASS: TestCertExpiration (231.39s)

TestForceSystemdFlag (26.39s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-007398 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-007398 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.844787151s)
docker_test.go:126: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-007398 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-007398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-007398
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-007398: (2.316991794s)
--- PASS: TestForceSystemdFlag (26.39s)

TestForceSystemdEnv (37.13s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-156040 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-156040 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.1427622s)
helpers_test.go:175: Cleaning up "force-systemd-env-156040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-156040
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-156040: (1.982823211s)
--- PASS: TestForceSystemdEnv (37.13s)

TestKVMDriverInstallOrUpdate (2.33s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.33s)

TestErrorSpam/setup (20.87s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-339877 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-339877 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-339877 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-339877 --driver=docker  --container-runtime=crio: (20.869639597s)
--- PASS: TestErrorSpam/setup (20.87s)

TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.38s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 pause
--- PASS: TestErrorSpam/pause (1.38s)

TestErrorSpam/unpause (1.38s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 unpause
--- PASS: TestErrorSpam/unpause (1.38s)

TestErrorSpam/stop (1.33s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 stop: (1.171239261s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-339877 --log_dir /tmp/nospam-339877 stop
--- PASS: TestErrorSpam/stop (1.33s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/15074-18675/.minikube/files/etc/test/nested/copy/25485/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.11s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-742762 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0610 14:08:36.160590   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:36.166418   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:36.176706   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:36.199847   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:36.240114   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
functional_test.go:2229: (dbg) Done: out/minikube-linux-amd64 start -p functional-742762 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m10.104724107s)
--- PASS: TestFunctional/serial/StartWithProxy (70.11s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.23s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-742762 --alsologtostderr -v=8
E0610 14:08:36.320447   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:36.481191   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:36.801672   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:37.442004   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:38.722525   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:41.283485   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:46.404627   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:08:56.645572   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-742762 --alsologtostderr -v=8: (28.229282345s)
functional_test.go:658: soft start took 28.230000299s for "functional-742762" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.23s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-742762 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)

TestFunctional/serial/CacheCmd/cache/add_local (0.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-742762 /tmp/TestFunctionalserialCacheCmdcacheadd_local649032660/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 cache add minikube-local-cache-test:functional-742762
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 cache delete minikube-local-cache-test:functional-742762
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-742762
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.68s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (248.733707ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 kubectl -- --context functional-742762 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-742762 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (32.88s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-742762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0610 14:09:17.126302   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-742762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.878513899s)
functional_test.go:756: restart took 32.878620876s for "functional-742762" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.88s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-742762 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 logs: (1.361813565s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 logs --file /tmp/TestFunctionalserialLogsFileCmd960958374/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 logs --file /tmp/TestFunctionalserialLogsFileCmd960958374/001/logs.txt: (1.286722957s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 config get cpus: exit status 14 (60.531835ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 config get cpus: exit status 14 (43.364417ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DashboardCmd (8.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-742762 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-742762 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 60012: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.19s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-742762 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-742762 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.477855ms)

-- stdout --
	* [functional-742762] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15074
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0610 14:09:49.777092   56138 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:09:49.777248   56138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:09:49.777258   56138 out.go:309] Setting ErrFile to fd 2...
	I0610 14:09:49.777262   56138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:09:49.777415   56138 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:09:49.778113   56138 out.go:303] Setting JSON to false
	I0610 14:09:49.779398   56138 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6745,"bootTime":1686399445,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:09:49.779463   56138 start.go:137] virtualization: kvm guest
	I0610 14:09:49.782274   56138 out.go:177] * [functional-742762] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 14:09:49.784406   56138 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 14:09:49.784477   56138 notify.go:220] Checking for updates...
	I0610 14:09:49.786186   56138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:09:49.788332   56138 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:09:49.789956   56138 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:09:49.792519   56138 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 14:09:49.794347   56138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 14:09:49.796534   56138 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:09:49.797145   56138 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 14:09:49.821399   56138 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:09:49.821510   56138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:09:49.896645   56138 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-06-10 14:09:49.882563407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:09:49.896779   56138 docker.go:294] overlay module found
	I0610 14:09:49.898864   56138 out.go:177] * Using the docker driver based on existing profile
	I0610 14:09:49.900554   56138 start.go:297] selected driver: docker
	I0610 14:09:49.900568   56138 start.go:875] validating driver "docker" against &{Name:functional-742762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-742762 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:09:49.900698   56138 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 14:09:49.903809   56138 out.go:177] 
	W0610 14:09:49.905819   56138 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0610 14:09:49.908589   56138 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-742762 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-742762 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-742762 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (183.344343ms)

-- stdout --
	* [functional-742762] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15074
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0610 14:09:50.232651   56312 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:09:50.232767   56312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:09:50.232774   56312 out.go:309] Setting ErrFile to fd 2...
	I0610 14:09:50.232779   56312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:09:50.232944   56312 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:09:50.233442   56312 out.go:303] Setting JSON to false
	I0610 14:09:50.234410   56312 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6745,"bootTime":1686399445,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:09:50.234472   56312 start.go:137] virtualization: kvm guest
	I0610 14:09:50.237073   56312 out.go:177] * [functional-742762] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	I0610 14:09:50.238962   56312 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 14:09:50.238975   56312 notify.go:220] Checking for updates...
	I0610 14:09:50.240665   56312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:09:50.242458   56312 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:09:50.244250   56312 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:09:50.245980   56312 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 14:09:50.247717   56312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 14:09:50.249626   56312 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:09:50.250072   56312 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 14:09:50.278834   56312 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:09:50.278946   56312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:09:50.337928   56312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-06-10 14:09:50.33017192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:09:50.338013   56312 docker.go:294] overlay module found
	I0610 14:09:50.341401   56312 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0610 14:09:50.343116   56312 start.go:297] selected driver: docker
	I0610 14:09:50.343130   56312 start.go:875] validating driver "docker" against &{Name:functional-742762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-742762 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0610 14:09:50.343235   56312 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 14:09:50.345823   56312 out.go:177] 
	W0610 14:09:50.347628   56312 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0610 14:09:50.349200   56312 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.95s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)

TestFunctional/parallel/ServiceCmdConnect (7.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-742762 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-742762 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-8465h" [e4e59e00-2daa-4c97-a609-810e5bc2dd33] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-8465h" [e4e59e00-2daa-4c97-a609-810e5bc2dd33] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.007117574s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.49.2:30569
functional_test.go:1673: http://192.168.49.2:30569: success! body:

Hostname: hello-node-connect-6fb669fc84-8465h

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30569
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.66s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (29.41s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6d34b145-b019-433a-9994-84f55f918bee] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01459354s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-742762 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-742762 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-742762 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-742762 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-742762 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dd6b8168-c4cb-47fc-895a-2719b0cc3db2] Pending
helpers_test.go:344: "sp-pod" [dd6b8168-c4cb-47fc-895a-2719b0cc3db2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dd6b8168-c4cb-47fc-895a-2719b0cc3db2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.014506099s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-742762 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-742762 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-742762 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c2a7213c-690f-4aa5-a837-5c73fb9a883c] Pending
helpers_test.go:344: "sp-pod" [c2a7213c-690f-4aa5-a837-5c73fb9a883c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c2a7213c-690f-4aa5-a837-5c73fb9a883c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008593043s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-742762 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.41s)

TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (1.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh -n functional-742762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 cp functional-742762:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3399888762/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh -n functional-742762 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.14s)

TestFunctional/parallel/MySQL (23.56s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-742762 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-czpgn" [b75e6102-e2d8-4739-84e1-7f17dcab0399] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-czpgn" [b75e6102-e2d8-4739-84e1-7f17dcab0399] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.011707252s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-742762 exec mysql-7db894d786-czpgn -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-742762 exec mysql-7db894d786-czpgn -- mysql -ppassword -e "show databases;": exit status 1 (181.255748ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-742762 exec mysql-7db894d786-czpgn -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-742762 exec mysql-7db894d786-czpgn -- mysql -ppassword -e "show databases;": exit status 1 (127.728971ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-742762 exec mysql-7db894d786-czpgn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.56s)

TestFunctional/parallel/FileSync (0.36s)
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/25485/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo cat /etc/test/nested/copy/25485/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (1.95s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/25485.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo cat /etc/ssl/certs/25485.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/25485.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo cat /usr/share/ca-certificates/25485.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/254852.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo cat /etc/ssl/certs/254852.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/254852.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo cat /usr/share/ca-certificates/254852.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.95s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-742762 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo systemctl is-active docker"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 ssh "sudo systemctl is-active docker": exit status 1 (254.738427ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo systemctl is-active containerd"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 ssh "sudo systemctl is-active containerd": exit status 1 (267.696087ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
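Note: the non-zero exits above are the expected outcome. With cri-o as the active runtime, `systemctl is-active docker` (and `containerd`) prints `inactive` and exits with status 3, which the SSH wrapper surfaces as `Process exited with status 3`. A minimal self-contained sketch of that pass condition (`check_runtime_disabled` is a hypothetical helper, not part of minikube; the systemctl call is simulated):

```shell
# Mirror the test's logic: a disabled runtime should report "inactive" on stdout
# together with a non-zero exit from `systemctl is-active` (status 3 = inactive).
check_runtime_disabled() {
  output="$1"; status="$2"
  if [ "$status" -ne 0 ] && [ "$output" = "inactive" ]; then
    echo "ok: runtime disabled"
  else
    echo "unexpected: runtime may be active"
  fi
}

check_runtime_disabled "inactive" 3   # what the test observed for docker and containerd
check_runtime_disabled "active" 0     # what a still-enabled runtime would look like
```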

TestFunctional/parallel/License (0.14s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.14s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-742762 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-742762 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-9w7mm" [66830cbe-e424-400f-9d2a-bc45701972f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-9w7mm" [66830cbe-e424-400f-9d2a-bc45701972f6] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.014783539s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)
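Note: the "healthy within 10.014783539s" result above comes from polling pod state after `kubectl create deployment` and `kubectl expose`. A self-contained sketch of such a polling loop, with `kubectl` stubbed out so it runs without a cluster (the jsonpath query in the comment is illustrative, not necessarily the exact one the test harness uses):

```shell
# Stub kubectl so the sketch is runnable anywhere; a real invocation would be:
#   kubectl get pods -l app=hello-node -o jsonpath='{.items[0].status.phase}'
kubectl() { echo "Running"; }

# Poll until the pod reports the Running phase (bounded number of attempts).
for i in 1 2 3; do
  phase="$(kubectl get pods -l app=hello-node)"
  if [ "$phase" = "Running" ]; then
    echo "app=hello-node healthy"
    break
  fi
  sleep 1
done
```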

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.63s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls --format short --alsologtostderr
functional_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 image ls --format short --alsologtostderr: (1.321191192s)
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-742762 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-742762
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-742762 image ls --format short --alsologtostderr:
I0610 14:10:19.635609   60614 out.go:296] Setting OutFile to fd 1 ...
I0610 14:10:19.635725   60614 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:19.635734   60614 out.go:309] Setting ErrFile to fd 2...
I0610 14:10:19.635738   60614 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:19.635864   60614 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
I0610 14:10:19.636413   60614 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:19.636537   60614 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:19.636968   60614 cli_runner.go:164] Run: docker container inspect functional-742762 --format={{.State.Status}}
I0610 14:10:19.652686   60614 ssh_runner.go:195] Run: systemctl --version
I0610 14:10:19.652738   60614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-742762
I0610 14:10:19.677083   60614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/functional-742762/id_rsa Username:docker}
I0610 14:10:19.864319   60614 ssh_runner.go:195] Run: sudo crictl images --output json
I0610 14:10:20.898625   60614 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.034272265s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-742762 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.27.2            | c5b13e4f7806d | 122MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | dd6675b5cfea1 | 588MB  |
| docker.io/library/nginx                 | latest             | f9c14fe76d502 | 147MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| docker.io/library/nginx                 | alpine             | fe7edaf8a8dcf | 43.2MB |
| gcr.io/google-containers/addon-resizer  | functional-742762  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-proxy              | v1.27.2            | b8aa50768fd67 | 72.7MB |
| registry.k8s.io/kube-scheduler          | v1.27.2            | 89e70da428d29 | 59.8MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-controller-manager | v1.27.2            | ac2b7465ebba9 | 114MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-742762 image ls --format table --alsologtostderr:
I0610 14:10:21.229392   61177 out.go:296] Setting OutFile to fd 1 ...
I0610 14:10:21.229524   61177 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:21.229533   61177 out.go:309] Setting ErrFile to fd 2...
I0610 14:10:21.229538   61177 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:21.229666   61177 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
I0610 14:10:21.230268   61177 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:21.230379   61177 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:21.230763   61177 cli_runner.go:164] Run: docker container inspect functional-742762 --format={{.State.Status}}
I0610 14:10:21.247075   61177 ssh_runner.go:195] Run: systemctl --version
I0610 14:10:21.247125   61177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-742762
I0610 14:10:21.263738   61177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/functional-742762/id_rsa Username:docker}
I0610 14:10:21.351841   61177 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
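Note: the Size column above is the raw byte count from the runtime rendered in SI units (1 MB = 1,000,000 bytes; e.g. kube-apiserver's 122053574 bytes prints as 122MB). A rough integer-precision sketch of that conversion; minikube itself uses a humanizing library that keeps decimals (31.5MB, 4.63MB), so `to_si` here is a simplified, hypothetical helper:

```shell
# Convert a byte count to the SI-style size shown in the table (integer precision).
to_si() {
  bytes="$1"
  if [ "$bytes" -ge 1000000 ]; then
    echo "$(( bytes / 1000000 ))MB"
  else
    echo "$(( bytes / 1000 ))kB"
  fi
}

to_si 122053574   # kube-apiserver image -> 122MB
to_si 750414      # pause:3.9 image     -> 750kB
```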

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-742762 image ls --format json --alsologtostderr:
[{"id":"ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:279461bc1c0b4753dc83677a927b9f7827012b3adbcaa5df9dfd4af8b0987bc6","registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"113906988"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"dd6675b5cfea17abb655ea8229cbcfa5db9d0b041f839db0c24228c2e18a4bdf","repoDigests":["docker.io/library/mysql@sha256:c4c526804552f6b4e8e124e182f5df4b09bf4bc88cba8a94adbd0a2ccb81dce6","docker.io/library/mysql@sha256:f57eef421000aaf8332a91ab0b6c96b3c83ed2a981c29e6528b21ce10197cd16"],"repoTags":["docker.io/library/mysql:5.7"],"size":"588230308"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","repoDigests":["registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9","registry.k8s.io/kube-apiserver@sha256:95388fe585f1d6f65d414678042a281f80593e78cabaeeb8520a0873ebbb54f2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"122053574"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"fe7edaf8a8dcf9af72f49cf0a0219e3ace17667bafc537f0d4a0ab1bd7f10467","repoDigests":["docker.io/library/nginx@sha256:0b0af14a00ea0e4fd9b09e77d2b89b71b5c5a97f9aa073553f355415bc34ae33","docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90"],"repoTags":["docker.io/library/nginx:alpine"],"size":"43234868"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-742762"],"size":"34114467"},{"id":"f9c14fe76d502861ba0939bc3189e642c02e257f06f4c0214b1f8ca329326cda","repoDigests":["docker.io/library/nginx@sha256:6b06964cdbbc517102ce5e0cef95152f3c6a7ef703e4057cb574539de91f72e6","docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305"],"repoTags":["docker.io/library/nginx:latest"],"size":"146967160"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","repoDigests":["registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f","registry.k8s.io/kube-proxy@sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"72709527"},{"id":"89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177","registry.k8s.io/kube-scheduler@sha256:f8be7505892d1671a15afa3ac6c3b31e50da87dd59a4745e30a5b3f9f584ee6e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"59802924"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-742762 image ls --format json --alsologtostderr:
I0610 14:10:20.967580   60937 out.go:296] Setting OutFile to fd 1 ...
I0610 14:10:20.967682   60937 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:20.967690   60937 out.go:309] Setting ErrFile to fd 2...
I0610 14:10:20.967695   60937 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:20.967856   60937 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
I0610 14:10:20.968367   60937 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:20.968460   60937 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:20.968848   60937 cli_runner.go:164] Run: docker container inspect functional-742762 --format={{.State.Status}}
I0610 14:10:20.988480   60937 ssh_runner.go:195] Run: systemctl --version
I0610 14:10:20.988530   60937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-742762
I0610 14:10:21.007793   60937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/functional-742762/id_rsa Username:docker}
I0610 14:10:21.110689   60937 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
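Note: the JSON stdout above is ultimately the `sudo crictl images --output json` payload reshaped by minikube. A minimal POSIX-tools sketch of sanity-checking such a listing; the two-entry `json` sample below is made up for illustration, and only the `id`/`repoTags` field names come from the output above:

```shell
# Count image entries and check that a specific repo tag is present,
# using only comma-splitting and grep (no jq dependency).
json='[{"id":"a1","repoTags":["registry.k8s.io/pause:3.9"]},{"id":"b2","repoTags":["docker.io/library/nginx:alpine"]}]'

count=$(printf '%s' "$json" | tr ',' '\n' | grep -c '"id"')
echo "images: $count"

printf '%s' "$json" | grep -q 'registry.k8s.io/pause' && echo "pause image present"
```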

TestFunctional/parallel/ImageCommands/ImageListYaml (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls --format yaml --alsologtostderr
functional_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 image ls --format yaml --alsologtostderr: (1.138590763s)
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-742762 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: dd6675b5cfea17abb655ea8229cbcfa5db9d0b041f839db0c24228c2e18a4bdf
repoDigests:
- docker.io/library/mysql@sha256:c4c526804552f6b4e8e124e182f5df4b09bf4bc88cba8a94adbd0a2ccb81dce6
- docker.io/library/mysql@sha256:f57eef421000aaf8332a91ab0b6c96b3c83ed2a981c29e6528b21ce10197cd16
repoTags:
- docker.io/library/mysql:5.7
size: "588230308"
- id: fe7edaf8a8dcf9af72f49cf0a0219e3ace17667bafc537f0d4a0ab1bd7f10467
repoDigests:
- docker.io/library/nginx@sha256:0b0af14a00ea0e4fd9b09e77d2b89b71b5c5a97f9aa073553f355415bc34ae33
- docker.io/library/nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90
repoTags:
- docker.io/library/nginx:alpine
size: "43234868"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177
- registry.k8s.io/kube-scheduler@sha256:f8be7505892d1671a15afa3ac6c3b31e50da87dd59a4745e30a5b3f9f584ee6e
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "59802924"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:279461bc1c0b4753dc83677a927b9f7827012b3adbcaa5df9dfd4af8b0987bc6
- registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "113906988"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-742762
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9
- registry.k8s.io/kube-apiserver@sha256:95388fe585f1d6f65d414678042a281f80593e78cabaeeb8520a0873ebbb54f2
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "122053574"
- id: b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f
- registry.k8s.io/kube-proxy@sha256:931b8fa2393b7e2a926afbfd24784153760b999ddbf2059f2cb652510ecdef83
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "72709527"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: f9c14fe76d502861ba0939bc3189e642c02e257f06f4c0214b1f8ca329326cda
repoDigests:
- docker.io/library/nginx@sha256:6b06964cdbbc517102ce5e0cef95152f3c6a7ef703e4057cb574539de91f72e6
- docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
repoTags:
- docker.io/library/nginx:latest
size: "146967160"

functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-742762 image ls --format yaml --alsologtostderr:
I0610 14:10:19.814104   60671 out.go:296] Setting OutFile to fd 1 ...
I0610 14:10:19.814252   60671 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:19.814263   60671 out.go:309] Setting ErrFile to fd 2...
I0610 14:10:19.814268   60671 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:19.814433   60671 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
I0610 14:10:19.815026   60671 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:19.815136   60671 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:19.815693   60671 cli_runner.go:164] Run: docker container inspect functional-742762 --format={{.State.Status}}
I0610 14:10:19.834388   60671 ssh_runner.go:195] Run: systemctl --version
I0610 14:10:19.834443   60671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-742762
I0610 14:10:19.849578   60671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/functional-742762/id_rsa Username:docker}
I0610 14:10:19.950116   60671 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.14s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 ssh pgrep buildkitd: exit status 1 (357.977433ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image build -t localhost/my-image:functional-742762 testdata/build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 image build -t localhost/my-image:functional-742762 testdata/build --alsologtostderr: (1.942490768s)
functional_test.go:318: (dbg) Stdout: out/minikube-linux-amd64 -p functional-742762 image build -t localhost/my-image:functional-742762 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2c2b5e479c3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-742762
--> f5b9cf9ace7
Successfully tagged localhost/my-image:functional-742762
f5b9cf9ace796d5b8c719def17fd5e849bc615f9f418e0f3b2c9fd636c04d1bd
functional_test.go:321: (dbg) Stderr: out/minikube-linux-amd64 -p functional-742762 image build -t localhost/my-image:functional-742762 testdata/build --alsologtostderr:
I0610 14:10:21.313639   61210 out.go:296] Setting OutFile to fd 1 ...
I0610 14:10:21.313835   61210 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:21.313863   61210 out.go:309] Setting ErrFile to fd 2...
I0610 14:10:21.313874   61210 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 14:10:21.314010   61210 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
I0610 14:10:21.314559   61210 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:21.315128   61210 config.go:182] Loaded profile config "functional-742762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
I0610 14:10:21.315495   61210 cli_runner.go:164] Run: docker container inspect functional-742762 --format={{.State.Status}}
I0610 14:10:21.336217   61210 ssh_runner.go:195] Run: systemctl --version
I0610 14:10:21.336312   61210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-742762
I0610 14:10:21.355950   61210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/functional-742762/id_rsa Username:docker}
I0610 14:10:21.454678   61210 build_images.go:151] Building image from path: /tmp/build.2897244722.tar
I0610 14:10:21.454726   61210 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0610 14:10:21.462808   61210 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2897244722.tar
I0610 14:10:21.466341   61210 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2897244722.tar: stat -c "%s %y" /var/lib/minikube/build/build.2897244722.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2897244722.tar': No such file or directory
I0610 14:10:21.466374   61210 ssh_runner.go:362] scp /tmp/build.2897244722.tar --> /var/lib/minikube/build/build.2897244722.tar (3072 bytes)
I0610 14:10:21.491169   61210 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2897244722
I0610 14:10:21.499853   61210 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2897244722 -xf /var/lib/minikube/build/build.2897244722.tar
I0610 14:10:21.508073   61210 crio.go:297] Building image: /var/lib/minikube/build/build.2897244722
I0610 14:10:21.508142   61210 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-742762 /var/lib/minikube/build/build.2897244722 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0610 14:10:23.185220   61210 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-742762 /var/lib/minikube/build/build.2897244722 --cgroup-manager=cgroupfs: (1.677056225s)
I0610 14:10:23.185270   61210 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2897244722
I0610 14:10:23.194040   61210 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2897244722.tar
I0610 14:10:23.201044   61210 build_images.go:207] Built localhost/my-image:functional-742762 from /tmp/build.2897244722.tar
I0610 14:10:23.201069   61210 build_images.go:123] succeeded building to: functional-742762
I0610 14:10:23.201079   61210 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.49s)

TestFunctional/parallel/ImageCommands/Setup (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.095814441s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-742762
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.12s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "286.323522ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "47.996661ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "275.019295ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "41.86908ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 service list -o json
functional_test.go:1492: Took "315.172682ms" to run "out/minikube-linux-amd64 -p functional-742762 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.49.2:32722
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.49.2:32722
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-742762 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-742762 tunnel --alsologtostderr]
E0610 14:09:58.086913   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-742762 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-742762 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 57333: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-742762 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-742762 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f34825ca-20b6-45e2-8f43-7b00c44a3d60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f34825ca-20b6-45e2-8f43-7b00c44a3d60] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.007424573s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.37s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image load --daemon gcr.io/google-containers/addon-resizer:functional-742762 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 image load --daemon gcr.io/google-containers/addon-resizer:functional-742762 --alsologtostderr: (6.108232781s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.31s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.007255727s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-742762
functional_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image load --daemon gcr.io/google-containers/addon-resizer:functional-742762 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 image load --daemon gcr.io/google-containers/addon-resizer:functional-742762 --alsologtostderr: (4.324548229s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image save gcr.io/google-containers/addon-resizer:functional-742762 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image rm gcr.io/google-containers/addon-resizer:functional-742762 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-742762
functional_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 image save --daemon gcr.io/google-containers/addon-resizer:functional-742762 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p functional-742762 image save --daemon gcr.io/google-containers/addon-resizer:functional-742762 --alsologtostderr: (2.160275429s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-742762
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.20s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-742762 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.28.134 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-742762 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (7.22s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdany-port764562105/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1686406214055735088" to /tmp/TestFunctionalparallelMountCmdany-port764562105/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1686406214055735088" to /tmp/TestFunctionalparallelMountCmdany-port764562105/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1686406214055735088" to /tmp/TestFunctionalparallelMountCmdany-port764562105/001/test-1686406214055735088
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (242.561261ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 10 14:10 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 10 14:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 10 14:10 test-1686406214055735088
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh cat /mount-9p/test-1686406214055735088
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-742762 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bc5b1124-cafd-41ad-8c79-d29781181ea7] Pending
helpers_test.go:344: "busybox-mount" [bc5b1124-cafd-41ad-8c79-d29781181ea7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bc5b1124-cafd-41ad-8c79-d29781181ea7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bc5b1124-cafd-41ad-8c79-d29781181ea7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006340986s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-742762 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdany-port764562105/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.22s)

TestFunctional/parallel/MountCmd/specific-port (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdspecific-port3146465806/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (291.678648ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdspecific-port3146465806/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 ssh "sudo umount -f /mount-9p": exit status 1 (243.556341ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-742762 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdspecific-port3146465806/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.53s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup259915722/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup259915722/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup259915722/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T" /mount1
2023/06/10 14:10:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T" /mount1: exit status 1 (282.247413ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-742762 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-742762 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup259915722/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup259915722/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-742762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup259915722/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-742762
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-742762
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-742762
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (58.55s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-889215 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0610 14:11:20.007553   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-889215 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (58.54981819s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (58.55s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.49s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889215 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-889215 addons enable ingress --alsologtostderr -v=5: (10.487563703s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.49s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.33s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889215 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.33s)

TestJSONOutput/start/Command (66.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-712737 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0610 14:14:45.882357   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:45.887650   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:45.897939   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:45.918176   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:45.958493   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:46.038834   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:46.199571   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:46.520095   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:47.161022   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:48.441505   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:51.002306   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:14:56.122624   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:15:06.362762   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:15:26.843927   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-712737 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m6.725282162s)
--- PASS: TestJSONOutput/start/Command (66.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-712737 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-712737 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-712737 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-712737 --output=json --user=testUser: (5.739199318s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-087993 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-087993 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.617887ms)
-- stdout --
	{"specversion":"1.0","id":"c7fbb186-0d92-4008-a2d0-f5d01987c263","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-087993] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"81f8b62f-b132-49e2-8a21-fd453e0371b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15074"}}
	{"specversion":"1.0","id":"f7b30d80-c1f4-4f6c-bb14-b61a458999f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e2c247c3-04cc-4098-82a2-b9c3516e1007","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig"}}
	{"specversion":"1.0","id":"ec57bdcc-bed7-4479-83a3-094d62590424","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube"}}
	{"specversion":"1.0","id":"7fce19c9-370d-40a7-8126-55e1c119856b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d1b96166-45a6-4e0b-a10e-82652d936e1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"013acb4e-371d-4444-b873-aa51d024d3cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-087993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-087993
--- PASS: TestErrorJSONOutput (0.18s)

TestKicCustomNetwork/create_custom_network (33.55s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-391309 --network=
E0610 14:16:07.805049   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-391309 --network=: (31.563147094s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-391309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-391309
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-391309: (1.966527024s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.55s)

TestKicCustomNetwork/use_default_bridge_network (23.3s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-508306 --network=bridge
E0610 14:16:36.944719   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:36.949981   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:36.960224   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:36.980510   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:37.020797   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:37.101129   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:37.261533   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:37.582114   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:38.222924   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:39.503336   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:16:42.063861   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-508306 --network=bridge: (21.464172601s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-508306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-508306
E0610 14:16:47.184516   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-508306: (1.81819582s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.30s)

TestKicExistingNetwork (26.55s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-723346 --network=existing-network
E0610 14:16:57.425348   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-723346 --network=existing-network: (24.573331281s)
helpers_test.go:175: Cleaning up "existing-network-723346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-723346
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-723346: (1.850658056s)
--- PASS: TestKicExistingNetwork (26.55s)

TestKicCustomSubnet (26.59s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-196279 --subnet=192.168.60.0/24
E0610 14:17:17.906315   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:17:29.725278   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-196279 --subnet=192.168.60.0/24: (24.578156953s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-196279 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-196279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-196279
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-196279: (1.994689535s)
--- PASS: TestKicCustomSubnet (26.59s)

TestKicStaticIP (24.66s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-412631 --static-ip=192.168.200.200
E0610 14:17:58.868069   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-412631 --static-ip=192.168.200.200: (22.516405593s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-412631 ip
helpers_test.go:175: Cleaning up "static-ip-412631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-412631
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-412631: (2.028092947s)
--- PASS: TestKicStaticIP (24.66s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (47.71s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-169362 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-169362 --driver=docker  --container-runtime=crio: (20.100091741s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-172418 --driver=docker  --container-runtime=crio
E0610 14:18:36.156618   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-172418 --driver=docker  --container-runtime=crio: (23.157175939s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-169362
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-172418
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-172418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-172418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-172418: (1.80306733s)
helpers_test.go:175: Cleaning up "first-169362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-169362
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-169362: (1.771301022s)
--- PASS: TestMinikubeProfile (47.71s)

TestMountStart/serial/StartWithMountFirst (7.76s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-184990 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-184990 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.757922753s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.76s)

TestMountStart/serial/VerifyMountFirst (0.22s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-184990 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.22s)

TestMountStart/serial/StartWithMountSecond (7.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-195597 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-195597 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.628668804s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.63s)

TestMountStart/serial/VerifyMountSecond (0.21s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-195597 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.21s)

TestMountStart/serial/DeleteFirst (1.57s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-184990 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-184990 --alsologtostderr -v=5: (1.565729494s)
--- PASS: TestMountStart/serial/DeleteFirst (1.57s)

TestMountStart/serial/VerifyMountPostDelete (0.22s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-195597 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.22s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-195597
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-195597: (1.194422431s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (6.59s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-195597
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-195597: (5.58778416s)
--- PASS: TestMountStart/serial/RestartStopped (6.59s)

TestMountStart/serial/VerifyMountPostStop (0.22s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-195597 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.22s)

TestMultiNode/serial/FreshStart2Nodes (127.49s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007346 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0610 14:19:45.882777   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:20:13.566061   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-007346 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m7.085879717s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.49s)

TestMultiNode/serial/DeployApp2Nodes (3.75s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-007346 -- rollout status deployment/busybox: (2.199802186s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-6nqgr -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-r6l8p -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-6nqgr -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-r6l8p -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-6nqgr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-007346 -- exec busybox-67b7f59bb-r6l8p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.75s)

TestMultiNode/serial/AddNode (16.25s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-007346 -v 3 --alsologtostderr
E0610 14:21:36.945496   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-007346 -v 3 --alsologtostderr: (15.709276583s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.25s)

TestMultiNode/serial/ProfileList (0.25s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

TestMultiNode/serial/CopyFile (8.13s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp testdata/cp-test.txt multinode-007346:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp multinode-007346:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3233274941/001/cp-test_multinode-007346.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp multinode-007346:/home/docker/cp-test.txt multinode-007346-m02:/home/docker/cp-test_multinode-007346_multinode-007346-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m02 "sudo cat /home/docker/cp-test_multinode-007346_multinode-007346-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp multinode-007346:/home/docker/cp-test.txt multinode-007346-m03:/home/docker/cp-test_multinode-007346_multinode-007346-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m03 "sudo cat /home/docker/cp-test_multinode-007346_multinode-007346-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp testdata/cp-test.txt multinode-007346-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp multinode-007346-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3233274941/001/cp-test_multinode-007346-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp multinode-007346-m02:/home/docker/cp-test.txt multinode-007346:/home/docker/cp-test_multinode-007346-m02_multinode-007346.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346 "sudo cat /home/docker/cp-test_multinode-007346-m02_multinode-007346.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp multinode-007346-m02:/home/docker/cp-test.txt multinode-007346-m03:/home/docker/cp-test_multinode-007346-m02_multinode-007346-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m03 "sudo cat /home/docker/cp-test_multinode-007346-m02_multinode-007346-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp testdata/cp-test.txt multinode-007346-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp multinode-007346-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3233274941/001/cp-test_multinode-007346-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp multinode-007346-m03:/home/docker/cp-test.txt multinode-007346:/home/docker/cp-test_multinode-007346-m03_multinode-007346.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346 "sudo cat /home/docker/cp-test_multinode-007346-m03_multinode-007346.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 cp multinode-007346-m03:/home/docker/cp-test.txt multinode-007346-m02:/home/docker/cp-test_multinode-007346-m03_multinode-007346-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 ssh -n multinode-007346-m02 "sudo cat /home/docker/cp-test_multinode-007346-m03_multinode-007346-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.13s)

TestMultiNode/serial/StopNode (2.01s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-007346 node stop m03: (1.174390137s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-007346 status: exit status 7 (416.220723ms)

-- stdout --
	multinode-007346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-007346-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-007346-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-007346 status --alsologtostderr: exit status 7 (419.965598ms)

-- stdout --
	multinode-007346
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-007346-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-007346-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0610 14:22:01.948398  121928 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:22:01.948537  121928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:22:01.948546  121928 out.go:309] Setting ErrFile to fd 2...
	I0610 14:22:01.948550  121928 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:22:01.948660  121928 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:22:01.948821  121928 out.go:303] Setting JSON to false
	I0610 14:22:01.948846  121928 mustload.go:65] Loading cluster: multinode-007346
	I0610 14:22:01.948958  121928 notify.go:220] Checking for updates...
	I0610 14:22:01.949171  121928 config.go:182] Loaded profile config "multinode-007346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:22:01.949184  121928 status.go:255] checking status of multinode-007346 ...
	I0610 14:22:01.949513  121928 cli_runner.go:164] Run: docker container inspect multinode-007346 --format={{.State.Status}}
	I0610 14:22:01.966414  121928 status.go:330] multinode-007346 host status = "Running" (err=<nil>)
	I0610 14:22:01.966448  121928 host.go:66] Checking if "multinode-007346" exists ...
	I0610 14:22:01.966703  121928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-007346
	I0610 14:22:01.982445  121928 host.go:66] Checking if "multinode-007346" exists ...
	I0610 14:22:01.982741  121928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 14:22:01.982778  121928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346
	I0610 14:22:01.998057  121928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346/id_rsa Username:docker}
	I0610 14:22:02.083001  121928 ssh_runner.go:195] Run: systemctl --version
	I0610 14:22:02.086599  121928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:22:02.096200  121928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:22:02.142160  121928 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:56 SystemTime:2023-06-10 14:22:02.133724374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:22:02.142753  121928 kubeconfig.go:92] found "multinode-007346" server: "https://192.168.58.2:8443"
	I0610 14:22:02.142777  121928 api_server.go:166] Checking apiserver status ...
	I0610 14:22:02.142814  121928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 14:22:02.152618  121928 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	I0610 14:22:02.160534  121928 api_server.go:182] apiserver freezer: "4:freezer:/docker/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/crio/crio-6cc0540979c3a741078a04645dc8a28174c1c655ae448667f247327e4fa97d1a"
	I0610 14:22:02.160586  121928 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2e604f00710c118971c75954472bdaf095d7764356672b0ced766cecdc3651dd/crio/crio-6cc0540979c3a741078a04645dc8a28174c1c655ae448667f247327e4fa97d1a/freezer.state
	I0610 14:22:02.167717  121928 api_server.go:204] freezer state: "THAWED"
	I0610 14:22:02.167743  121928 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0610 14:22:02.171986  121928 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0610 14:22:02.172003  121928 status.go:421] multinode-007346 apiserver status = Running (err=<nil>)
	I0610 14:22:02.172013  121928 status.go:257] multinode-007346 status: &{Name:multinode-007346 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 14:22:02.172028  121928 status.go:255] checking status of multinode-007346-m02 ...
	I0610 14:22:02.172241  121928 cli_runner.go:164] Run: docker container inspect multinode-007346-m02 --format={{.State.Status}}
	I0610 14:22:02.188385  121928 status.go:330] multinode-007346-m02 host status = "Running" (err=<nil>)
	I0610 14:22:02.188410  121928 host.go:66] Checking if "multinode-007346-m02" exists ...
	I0610 14:22:02.188683  121928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-007346-m02
	I0610 14:22:02.204199  121928 host.go:66] Checking if "multinode-007346-m02" exists ...
	I0610 14:22:02.204404  121928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 14:22:02.204447  121928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-007346-m02
	I0610 14:22:02.219979  121928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/15074-18675/.minikube/machines/multinode-007346-m02/id_rsa Username:docker}
	I0610 14:22:02.302906  121928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 14:22:02.313744  121928 status.go:257] multinode-007346-m02 status: &{Name:multinode-007346-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 14:22:02.313770  121928 status.go:255] checking status of multinode-007346-m03 ...
	I0610 14:22:02.314047  121928 cli_runner.go:164] Run: docker container inspect multinode-007346-m03 --format={{.State.Status}}
	I0610 14:22:02.329733  121928 status.go:330] multinode-007346-m03 host status = "Stopped" (err=<nil>)
	I0610 14:22:02.329753  121928 status.go:343] host is not running, skipping remaining checks
	I0610 14:22:02.329759  121928 status.go:257] multinode-007346-m03 status: &{Name:multinode-007346-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.01s)

TestMultiNode/serial/StartAfterStop (10.81s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 node start m03 --alsologtostderr
E0610 14:22:04.630392   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-007346 node start m03 --alsologtostderr: (10.189306144s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.81s)

TestMultiNode/serial/RestartKeepsNodes (110.45s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-007346
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-007346
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-007346: (24.671423418s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007346 --wait=true -v=8 --alsologtostderr
E0610 14:23:36.157028   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-007346 --wait=true -v=8 --alsologtostderr: (1m25.700903641s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-007346
--- PASS: TestMultiNode/serial/RestartKeepsNodes (110.45s)

TestMultiNode/serial/DeleteNode (4.52s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-007346 node delete m03: (3.992176169s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.52s)

TestMultiNode/serial/StopMultiNode (23.76s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-007346 stop: (23.611996889s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-007346 status: exit status 7 (75.938863ms)

-- stdout --
	multinode-007346
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-007346-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-007346 status --alsologtostderr: exit status 7 (73.517183ms)

-- stdout --
	multinode-007346
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-007346-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0610 14:24:31.832234  132230 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:24:31.832674  132230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:24:31.832702  132230 out.go:309] Setting ErrFile to fd 2...
	I0610 14:24:31.832709  132230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:24:31.832976  132230 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:24:31.833295  132230 out.go:303] Setting JSON to false
	I0610 14:24:31.833424  132230 notify.go:220] Checking for updates...
	I0610 14:24:31.833385  132230 mustload.go:65] Loading cluster: multinode-007346
	I0610 14:24:31.834030  132230 config.go:182] Loaded profile config "multinode-007346": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:24:31.834046  132230 status.go:255] checking status of multinode-007346 ...
	I0610 14:24:31.834448  132230 cli_runner.go:164] Run: docker container inspect multinode-007346 --format={{.State.Status}}
	I0610 14:24:31.850326  132230 status.go:330] multinode-007346 host status = "Stopped" (err=<nil>)
	I0610 14:24:31.850362  132230 status.go:343] host is not running, skipping remaining checks
	I0610 14:24:31.850372  132230 status.go:257] multinode-007346 status: &{Name:multinode-007346 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 14:24:31.850422  132230 status.go:255] checking status of multinode-007346-m02 ...
	I0610 14:24:31.850646  132230 cli_runner.go:164] Run: docker container inspect multinode-007346-m02 --format={{.State.Status}}
	I0610 14:24:31.866662  132230 status.go:330] multinode-007346-m02 host status = "Stopped" (err=<nil>)
	I0610 14:24:31.866677  132230 status.go:343] host is not running, skipping remaining checks
	I0610 14:24:31.866684  132230 status.go:257] multinode-007346-m02 status: &{Name:multinode-007346-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.76s)

TestMultiNode/serial/RestartMultiNode (76.06s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007346 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0610 14:24:45.882330   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:24:59.209846   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-007346 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m15.520725915s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-007346 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.06s)

TestMultiNode/serial/ValidateNameConflict (23.01s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-007346
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007346-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-007346-m02 --driver=docker  --container-runtime=crio: exit status 14 (62.130767ms)

-- stdout --
	* [multinode-007346-m02] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15074
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-007346-m02' is duplicated with machine name 'multinode-007346-m02' in profile 'multinode-007346'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-007346-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-007346-m03 --driver=docker  --container-runtime=crio: (20.88873904s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-007346
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-007346: exit status 80 (241.24585ms)
-- stdout --
	* Adding node m03 to cluster multinode-007346
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-007346-m03 already exists in multinode-007346-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-007346-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-007346-m03: (1.775270142s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.01s)
TestPreload (150.99s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-627374 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0610 14:26:36.945332   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-627374 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m6.720777596s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-627374 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-627374
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-627374: (5.683601326s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-627374 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0610 14:28:36.156721   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-627374 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m15.235479109s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-627374 image list
helpers_test.go:175: Cleaning up "test-preload-627374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-627374
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-627374: (2.216448626s)
--- PASS: TestPreload (150.99s)
TestScheduledStopUnix (98.58s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-080093 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-080093 --memory=2048 --driver=docker  --container-runtime=crio: (23.745378084s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-080093 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-080093 -n scheduled-stop-080093
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-080093 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-080093 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-080093 -n scheduled-stop-080093
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-080093
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-080093 --schedule 15s
E0610 14:29:45.885876   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-080093
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-080093: exit status 7 (55.994442ms)
-- stdout --
	scheduled-stop-080093
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-080093 -n scheduled-stop-080093
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-080093 -n scheduled-stop-080093: exit status 7 (54.463368ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-080093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-080093
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-080093: (3.678272695s)
--- PASS: TestScheduledStopUnix (98.58s)
TestInsufficientStorage (12.3s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-737980 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-737980 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.04891839s)
-- stdout --
	{"specversion":"1.0","id":"6f0b5e54-067b-468a-a038-4153887a1b0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-737980] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c5b07e3d-adc2-47bb-b8bb-4e485248beab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15074"}}
	{"specversion":"1.0","id":"a2106504-61f8-4feb-88aa-9cb93c5ccb0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"678fe89b-4924-4c57-bb26-d5378efa708c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig"}}
	{"specversion":"1.0","id":"56892ba2-e019-4fe4-b725-aa858e886837","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube"}}
	{"specversion":"1.0","id":"76a71db7-55a3-4599-adf8-0188d54ced52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4444d146-6f05-4050-9222-fda4a8a8195d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"97cc746b-81ec-4e48-bf35-5bb515777826","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"68f613bb-f485-4f19-88e4-44f04b03a3be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a265cc53-cf13-432f-893d-e44ae14124d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"26611ec0-65ef-4b7d-a86c-d9adfe578808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"abba35e5-933f-4545-99a7-316d0901a9b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-737980 in cluster insufficient-storage-737980","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d687a7a-edff-4883-86be-4e6e5f590043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"92c99046-6222-47d1-8b57-e476fccf7ff2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2dfa37ff-f7ee-401e-b1a8-038c165f5120","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-737980 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-737980 --output=json --layout=cluster: exit status 7 (240.758003ms)
-- stdout --
	{"Name":"insufficient-storage-737980","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-737980","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0610 14:30:36.180061  153644 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-737980" does not appear in /home/jenkins/minikube-integration/15074-18675/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-737980 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-737980 --output=json --layout=cluster: exit status 7 (232.648891ms)
-- stdout --
	{"Name":"insufficient-storage-737980","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-737980","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0610 14:30:36.413029  153734 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-737980" does not appear in /home/jenkins/minikube-integration/15074-18675/kubeconfig
	E0610 14:30:36.421880  153734 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/insufficient-storage-737980/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-737980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-737980
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-737980: (1.77902752s)
--- PASS: TestInsufficientStorage (12.30s)
TestKubernetesUpgrade (355.89s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553747 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0610 14:32:59.991423   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-553747 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.401156828s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-553747
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-553747: (1.26902482s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-553747 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-553747 status --format={{.Host}}: exit status 7 (64.818114ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553747 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-553747 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.368555257s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-553747 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553747 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-553747 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (89.371188ms)
-- stdout --
	* [kubernetes-upgrade-553747] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15074
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-553747
	    minikube start -p kubernetes-upgrade-553747 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5537472 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-553747 --kubernetes-version=v1.27.2
	    
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553747 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-553747 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.302921116s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-553747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-553747
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-553747: (2.320567396s)
--- PASS: TestKubernetesUpgrade (355.89s)
TestMissingContainerUpgrade (147.55s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.1.4276663778.exe start -p missing-upgrade-634851 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.9.1.4276663778.exe start -p missing-upgrade-634851 --memory=2200 --driver=docker  --container-runtime=crio: (1m11.584502343s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-634851
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-634851: (12.784041757s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-634851
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-634851 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-634851 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.340766927s)
helpers_test.go:175: Cleaning up "missing-upgrade-634851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-634851
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-634851: (2.213152832s)
--- PASS: TestMissingContainerUpgrade (147.55s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153333 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-153333 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (76.161169ms)
-- stdout --
	* [NoKubernetes-153333] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15074
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
TestNoKubernetes/serial/StartWithK8s (35.76s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153333 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-153333 --driver=docker  --container-runtime=crio: (35.438093364s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-153333 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.76s)
TestNetworkPlugins/group/false (8.25s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p false-474189 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-474189 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (180.359074ms)
-- stdout --
	* [false-474189] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15074
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0610 14:30:41.640415  155820 out.go:296] Setting OutFile to fd 1 ...
	I0610 14:30:41.640547  155820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:30:41.640565  155820 out.go:309] Setting ErrFile to fd 2...
	I0610 14:30:41.640576  155820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 14:30:41.640693  155820 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15074-18675/.minikube/bin
	I0610 14:30:41.641292  155820 out.go:303] Setting JSON to false
	I0610 14:30:41.642519  155820 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7997,"bootTime":1686399445,"procs":552,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1035-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 14:30:41.642577  155820 start.go:137] virtualization: kvm guest
	I0610 14:30:41.645416  155820 out.go:177] * [false-474189] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 14:30:41.647849  155820 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 14:30:41.647829  155820 notify.go:220] Checking for updates...
	I0610 14:30:41.650110  155820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 14:30:41.651893  155820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15074-18675/kubeconfig
	I0610 14:30:41.653798  155820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15074-18675/.minikube
	I0610 14:30:41.655471  155820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 14:30:41.657104  155820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 14:30:41.659530  155820 config.go:182] Loaded profile config "NoKubernetes-153333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:30:41.659703  155820 config.go:182] Loaded profile config "force-systemd-env-156040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:30:41.659852  155820 config.go:182] Loaded profile config "offline-crio-126429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.2
	I0610 14:30:41.659997  155820 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 14:30:41.689226  155820 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0610 14:30:41.689379  155820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0610 14:30:41.760457  155820 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:75 SystemTime:2023-06-10 14:30:41.748529665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1035-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660649472 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0610 14:30:41.760586  155820 docker.go:294] overlay module found
	I0610 14:30:41.764064  155820 out.go:177] * Using the docker driver based on user configuration
	I0610 14:30:41.766057  155820 start.go:297] selected driver: docker
	I0610 14:30:41.766078  155820 start.go:875] validating driver "docker" against <nil>
	I0610 14:30:41.766092  155820 start.go:886] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 14:30:41.768971  155820 out.go:177] 
	W0610 14:30:41.770857  155820 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0610 14:30:41.772692  155820 out.go:177] 
** /stderr **
net_test.go:86: 
----------------------- debugLogs start: false-474189 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-474189

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-474189

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-474189

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-474189

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-474189

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-474189

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-474189

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-474189

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-474189

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-474189

>>> host: /etc/nsswitch.conf:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /etc/hosts:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /etc/resolv.conf:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-474189

>>> host: crictl pods:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: crictl containers:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> k8s: describe netcat deployment:
error: context "false-474189" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-474189" does not exist

>>> k8s: netcat logs:
error: context "false-474189" does not exist

>>> k8s: describe coredns deployment:
error: context "false-474189" does not exist

>>> k8s: describe coredns pods:
error: context "false-474189" does not exist

>>> k8s: coredns logs:
error: context "false-474189" does not exist

>>> k8s: describe api server pod(s):
error: context "false-474189" does not exist

>>> k8s: api server logs:
error: context "false-474189" does not exist

>>> host: /etc/cni:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: ip a s:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: ip r s:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: iptables-save:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: iptables table nat:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> k8s: describe kube-proxy daemon set:
error: context "false-474189" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-474189" does not exist

>>> k8s: kube-proxy logs:
error: context "false-474189" does not exist

>>> host: kubelet daemon status:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: kubelet daemon config:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> k8s: kubelet logs:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-474189

>>> host: docker daemon status:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: docker daemon config:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /etc/docker/daemon.json:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: docker system info:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: cri-docker daemon status:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: cri-docker daemon config:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: cri-dockerd version:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: containerd daemon status:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: containerd daemon config:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /etc/containerd/config.toml:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: containerd config dump:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: crio daemon status:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: crio daemon config:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: /etc/crio:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

>>> host: crio config:
* Profile "false-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474189"

----------------------- debugLogs end: false-474189 [took: 7.820511649s] --------------------------------
helpers_test.go:175: Cleaning up "false-474189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-474189
--- PASS: TestNetworkPlugins/group/false (8.25s)

TestNoKubernetes/serial/StartWithStopK8s (8.03s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153333 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-153333 --no-kubernetes --driver=docker  --container-runtime=crio: (5.782911402s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-153333 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-153333 status -o json: exit status 2 (320.174277ms)
-- stdout --
	{"Name":"NoKubernetes-153333","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
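The status JSON above can be checked programmatically. A minimal sketch (the JSON literal is copied verbatim from the log; field names are those emitted by `minikube status -o json`):

```python
import json

# Status captured from the log above: with --no-kubernetes the host
# container is up but the Kubernetes components are stopped, which is
# why `minikube status` exits with status 2 rather than 0.
raw = '{"Name":"NoKubernetes-153333","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
status = json.loads(raw)

host_up = status["Host"] == "Running"
k8s_stopped = status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
print(host_up and k8s_stopped)  # → True
```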
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-153333
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-153333: (1.925624848s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.03s)

TestNoKubernetes/serial/Start (6.89s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153333 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-153333 --no-kubernetes --driver=docker  --container-runtime=crio: (6.888629822s)
--- PASS: TestNoKubernetes/serial/Start (6.89s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-153333 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-153333 "sudo systemctl is-active --quiet service kubelet": exit status 1 (237.439602ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
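The exit status is what makes this check work: `systemctl is-active --quiet` exits 0 only when the unit is active, and non-zero otherwise (the ssh session above reports status 3, the code systemd commonly uses for an inactive unit). A stand-alone sketch of that convention, with the `systemctl` call stubbed out so it runs without systemd:

```python
# Hypothetical stub standing in for `systemctl is-active --quiet kubelet`:
# returns the exit code systemd would use (0 = active, non-zero otherwise;
# commonly 3 for an inactive unit, matching the "status 3" seen in the log).
def is_active_exit_code(unit_state: str) -> int:
    return 0 if unit_state == "active" else 3

# The test treats any non-zero exit as "Kubernetes is not running".
def k8s_not_running(unit_state: str) -> bool:
    return is_active_exit_code(unit_state) != 0

print(k8s_not_running("inactive"))  # → True
print(k8s_not_running("active"))    # → False
```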

TestNoKubernetes/serial/ProfileList (1.28s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.28s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-153333
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-153333: (1.197750031s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (6.55s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-153333 --driver=docker  --container-runtime=crio
E0610 14:31:36.945591   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-153333 --driver=docker  --container-runtime=crio: (6.547301671s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.55s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-153333 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-153333 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.592513ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (0.6s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.61s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-150295
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.61s)

TestPause/serial/Start (71.13s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-106650 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-106650 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m11.127511425s)
--- PASS: TestPause/serial/Start (71.13s)

TestNetworkPlugins/group/auto/Start (72.36s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m12.362990067s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.36s)

TestNetworkPlugins/group/kindnet/Start (67.76s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0610 14:34:45.882284   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m7.75618687s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.76s)

TestPause/serial/SecondStartNoReconfiguration (43.08s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-106650 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-106650 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.042919018s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.08s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-474189 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-474189 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-pww2b" [d6d8bb53-4078-4049-9fb4-19e39f6207e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-pww2b" [d6d8bb53-4078-4049-9fb4-19e39f6207e1] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006231001s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.38s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-474189 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
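The Localhost and HairPin checks above both reduce to `nc -w 5 -i 5 -z <target> 8080`: open a TCP connection, send nothing, and report reachability through the exit code (the hairpin variant dials the pod's own service name, `netcat`, rather than `localhost`). A minimal stand-alone sketch of that zero-I/O probe in Python; the throwaway local listener is purely an illustration stand-in, not part of the test suite:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Equivalent of `nc -w 5 -z host port`: connect, send nothing, report success."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway listener so the probe has something local to hit
# (in the real test the target is the netcat service on port 8080).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # bind to any free port
server.listen(1)
port = server.getsockname()[1]

print(tcp_reachable("127.0.0.1", port))   # True: port is listening
server.close()
print(tcp_reachable("127.0.0.1", port))   # False: connection refused
```

The probe never transfers data, which is exactly why it works for both directions of the hairpin test: only the TCP handshake matters.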

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-pfqrn" [a8ae846b-a93b-484a-bc5b-4a9adfd83bf6] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01501184s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-474189 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-474189 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jzhgj" [75cd7d88-6d4c-4748-95c7-70670b057c40] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-jzhgj" [75cd7d88-6d4c-4748-95c7-70670b057c40] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006712512s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-474189 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (62.11s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.109401794s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.11s)

TestPause/serial/Pause (0.81s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-106650 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-106650 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-106650 --output=json --layout=cluster: exit status 2 (309.065477ms)

-- stdout --
	{"Name":"pause-106650","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-106650","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
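Note that `minikube status --output=json --layout=cluster` reports a paused cluster with a non-zero exit code (exit status 2 above) and encodes component state as HTTP-flavored status codes in the payload: 200 OK, 405 Stopped, 418 Paused. A small sketch of consuming that payload with Python's stdlib `json`; the string below is abridged from the stdout captured above:

```python
import json

# JSON abridged from the `minikube status` stdout recorded in this report.
payload = '''{"Name":"pause-106650","StatusCode":418,"StatusName":"Paused",
"Nodes":[{"Name":"pause-106650","StatusCode":200,"StatusName":"OK",
"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(payload)
print(status["StatusName"])          # Paused
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(f'{node["Name"]}/{name}: {comp["StatusName"]} ({comp["StatusCode"]})')
```

This prints the per-component breakdown the test asserts on: the apiserver paused (418) while the kubelet is stopped (405), which is why the cluster-level status is Paused even though the node reports 200.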

TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-106650 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-106650 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

TestPause/serial/DeletePaused (2.75s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-106650 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-106650 --alsologtostderr -v=5: (2.753584516s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

TestPause/serial/VerifyDeletedResources (18.66s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (18.575759464s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-106650
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-106650: exit status 1 (33.936473ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-106650: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (18.66s)

TestNetworkPlugins/group/custom-flannel/Start (54.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0610 14:36:36.945490   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.572985333s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.57s)

TestNetworkPlugins/group/enable-default-cni/Start (41.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (41.953306444s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (41.95s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-q2wwp" [05954ebb-bb23-41bb-80ef-92a2a8c65cce] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.01841018s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-474189 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (9.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-474189 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-zgbtq" [97cc28db-8c9b-4539-9d03-a47e55db5588] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-zgbtq" [97cc28db-8c9b-4539-9d03-a47e55db5588] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.007031249s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.35s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-474189 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-474189 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-g5hbh" [02f24e7c-f687-4c78-81e6-615187bfa123] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-g5hbh" [02f24e7c-f687-4c78-81e6-615187bfa123] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.008359793s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.42s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-474189 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-474189 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-474189 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-5tnnq" [5cbc397e-88da-44d9-8eff-a3deac81201a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-5tnnq" [5cbc397e-88da-44d9-8eff-a3deac81201a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.008763024s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.37s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-474189 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-474189 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (60.1s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.096597659s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.10s)

TestNetworkPlugins/group/bridge/Start (40.23s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-474189 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (40.230208699s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.23s)

TestStartStop/group/old-k8s-version/serial/FirstStart (123.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-696705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-696705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m3.493938611s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (123.49s)

TestStartStop/group/no-preload/serial/FirstStart (60.4s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-988177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-988177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m0.396472055s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (60.40s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-474189 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-474189 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-qdg45" [f48baf38-5881-448b-b6f6-0358fbe0265c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0610 14:38:36.156599   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-qdg45" [f48baf38-5881-448b-b6f6-0358fbe0265c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.008017479s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.33s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-474189 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-d7lr9" [5e9c9f2d-8137-42f7-920b-fd52ff2b52db] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.016974239s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-474189 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-474189 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-chnph" [a300b0fc-3a49-4954-8874-9976ce567062] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-chnph" [a300b0fc-3a49-4954-8874-9976ce567062] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006962806s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-474189 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-474189 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestStartStop/group/embed-certs/serial/FirstStart (67.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-352707 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-352707 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m7.88773524s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.89s)

TestStartStop/group/no-preload/serial/DeployApp (8.59s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-988177 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [98b77fa0-b735-458f-93ae-d9cbcc507fe5] Pending
helpers_test.go:344: "busybox" [98b77fa0-b735-458f-93ae-d9cbcc507fe5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [98b77fa0-b735-458f-93ae-d9cbcc507fe5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.03791619s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-988177 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.59s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-988177 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-988177 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/no-preload/serial/Stop (11.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-988177 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-988177 --alsologtostderr -v=3: (11.965264268s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.97s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-224879 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-224879 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (1m7.487892999s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.49s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-988177 -n no-preload-988177
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-988177 -n no-preload-988177: exit status 7 (66.33514ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-988177 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (340.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-988177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0610 14:39:45.882645   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-988177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (5m39.922483505s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-988177 -n no-preload-988177
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (340.25s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-696705 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cae47d45-71f5-4100-bc0c-0f7020df1461] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cae47d45-71f5-4100-bc0c-0f7020df1461] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.012059154s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-696705 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-696705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-696705 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.59s)

TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-696705 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-696705 --alsologtostderr -v=3: (11.973100951s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

TestStartStop/group/embed-certs/serial/DeployApp (7.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-352707 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2166fcee-b212-4968-938d-062efa07a92f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2166fcee-b212-4968-938d-062efa07a92f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.030603726s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-352707 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696705 -n old-k8s-version-696705
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696705 -n old-k8s-version-696705: exit status 7 (59.026166ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-696705 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (451.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-696705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-696705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m31.185471738s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-696705 -n old-k8s-version-696705
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (451.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-352707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-352707 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/embed-certs/serial/Stop (14.51s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-352707 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-352707 --alsologtostderr -v=3: (14.510489759s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (14.51s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-224879 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2e0e7ac1-8efc-4b80-b768-ed0102ce5498] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2e0e7ac1-8efc-4b80-b768-ed0102ce5498] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.012630856s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-224879 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-352707 -n embed-certs-352707
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-352707 -n embed-certs-352707: exit status 7 (57.511997ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-352707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/embed-certs/serial/SecondStart (340.69s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-352707 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-352707 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (5m40.36371868s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-352707 -n embed-certs-352707
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (340.69s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-224879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-224879 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.72s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-224879 --alsologtostderr -v=3
E0610 14:40:40.177681   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:40.182918   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:40.193150   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:40.213384   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:40.253820   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:40.334534   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:40.495616   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:40.816333   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:41.456431   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:42.736799   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:45.297635   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:40:50.418318   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-224879 --alsologtostderr -v=3: (12.064587712s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-224879 -n default-k8s-diff-port-224879
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-224879 -n default-k8s-diff-port-224879: exit status 7 (58.000008ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-224879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-224879 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0610 14:40:52.533871   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:52.539116   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:52.549349   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:52.569569   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:52.609825   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:52.690170   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:52.850586   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:53.171380   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:53.811677   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:55.092299   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:40:57.652922   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:41:00.658771   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:41:02.773416   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:41:13.013853   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:41:21.139584   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:41:33.494314   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:41:36.945657   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
E0610 14:41:39.209962   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:42:02.100226   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:42:10.685649   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:10.690934   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:10.701193   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:10.721435   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:10.761672   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:10.842002   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:11.002351   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:11.323424   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:11.964317   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:13.244713   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:14.455075   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:42:15.805589   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:20.926275   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:22.653367   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:22.658606   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:22.668867   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:22.689113   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:22.729340   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:22.809898   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:22.970281   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:23.290827   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:23.931436   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:25.212299   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:25.902863   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:25.908075   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:25.918290   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:25.938642   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:25.978876   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:26.059202   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:26.219822   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:26.540402   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:27.180545   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:27.773263   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:28.460910   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:31.021177   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:31.166906   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:42:32.894191   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:36.142119   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:43.134432   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:42:46.383075   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:42:51.647827   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:43:03.614759   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:43:06.863193   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:43:24.020821   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:43:32.608555   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:43:35.921766   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:35.927015   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:35.937251   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:35.957464   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:35.997698   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:36.078004   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:36.157247   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/addons-060929/client.crt: no such file or directory
E0610 14:43:36.238637   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:36.375918   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
E0610 14:43:36.559346   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:37.200112   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:38.480431   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:41.041162   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:44.574931   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:43:46.161656   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:46.557136   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:46.562412   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:46.572698   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:46.592963   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:46.633183   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:46.713486   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:46.873846   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:47.194243   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:47.823703   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
E0610 14:43:47.834933   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:49.115386   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:51.676262   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:43:56.402315   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:43:56.796922   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:44:07.037922   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:44:16.882507   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:44:27.518830   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:44:45.882193   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/functional-742762/client.crt: no such file or directory
E0610 14:44:54.529440   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/calico-474189/client.crt: no such file or directory
E0610 14:44:57.842923   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
E0610 14:45:06.496055   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/custom-flannel-474189/client.crt: no such file or directory
E0610 14:45:08.478975   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
E0610 14:45:09.744172   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-224879 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (5m39.816135123s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-224879 -n default-k8s-diff-port-224879
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (340.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lfcl2" [b8e6f335-519d-475d-98f0-b4e5c0d222da] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lfcl2" [b8e6f335-519d-475d-98f0-b4e5c0d222da] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.015817604s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.02s)
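The wait above ("waiting 9m0s for pods matching ... healthy within 7.015817604s") is a poll-until-Running loop over pods selected by label. A minimal sketch of that pattern, using a stubbed pod list instead of a live cluster (function and field names here are illustrative, not the helpers_test.go internals):

```python
# Sketch of the readiness wait: poll a pod listing until every pod
# matching a label selector reports phase Running, or time out.
# Pod data is stubbed; a real run would query the Kubernetes API.
import time

def pods_matching(pods, label_key, label_value):
    return [p for p in pods if p.get("labels", {}).get(label_key) == label_value]

def wait_for_running(list_pods, label_key, label_value, timeout_s=540, poll_s=0.01):
    """Return elapsed seconds once all matching pods are Running, else raise."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        matched = pods_matching(list_pods(), label_key, label_value)
        if matched and all(p["phase"] == "Running" for p in matched):
            return time.monotonic() - start
        time.sleep(poll_s)
    raise TimeoutError(f"{label_key}={label_value} not healthy within {timeout_s}s")

# Stubbed cluster state: the dashboard pod flips from Pending to Running,
# mirroring the Pending -> Running transition in the log above.
states = iter([
    [{"labels": {"k8s-app": "kubernetes-dashboard"}, "phase": "Pending"}],
    [{"labels": {"k8s-app": "kubernetes-dashboard"}, "phase": "Running"}],
])
last = [None]
def list_pods():
    try:
        last[0] = next(states)
    except StopIteration:
        pass
    return last[0]

elapsed = wait_for_running(list_pods, "k8s-app", "kubernetes-dashboard")
print(f"healthy within {elapsed:.3f}s")
```

The same shape covers every "waiting ... for pods matching" block in this report; only the selector, namespace, and timeout vary.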

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lfcl2" [b8e6f335-519d-475d-98f0-b4e5c0d222da] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006497674s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-988177 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-988177 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)
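The check above runs `sudo crictl images -o json` in the node and flags images outside the set minikube ships. A sketch of that filtering step, run against a trimmed stub of crictl's JSON output (the expected-prefix list here is illustrative, not the test's real allow-list):

```python
# Sketch of VerifyKubernetesImages: parse `crictl images -o json` output
# and report repo tags that do not match the expected minikube image set.
# EXPECTED_PREFIXES is an assumption for illustration only.
import json

EXPECTED_PREFIXES = (
    "registry.k8s.io/",
    "gcr.io/k8s-minikube/storage-provisioner",
)

def non_minikube_images(crictl_json: str):
    data = json.loads(crictl_json)
    found = []
    for img in data.get("images", []):
        for tag in img.get("repoTags", []):
            if not tag.startswith(EXPECTED_PREFIXES):
                found.append(tag)
    return found

# Trimmed stub mirroring the images seen in this report's log.
stub = json.dumps({"images": [
    {"repoTags": ["registry.k8s.io/kube-apiserver:v1.27.2"]},
    {"repoTags": ["docker.io/kindest/kindnetd:v20230511-dc714da8"]},
    {"repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]},
]})

for tag in non_minikube_images(stub):
    print("Found non-minikube image:", tag)
```

With this stub, the kindnetd and busybox tags are flagged while the kube-apiserver image passes, matching the "Found non-minikube image" lines in the log.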

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.48s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-988177 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-988177 -n no-preload-988177
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-988177 -n no-preload-988177: exit status 2 (265.848143ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-988177 -n no-preload-988177
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-988177 -n no-preload-988177: exit status 2 (264.152054ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-988177 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-988177 -n no-preload-988177
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-988177 -n no-preload-988177
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (34.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-193165 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0610 14:45:40.177435   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
E0610 14:45:52.534124   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-193165 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (34.120394072s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-193165 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-193165 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-193165 --alsologtostderr -v=3: (1.238931816s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-193165 -n newest-cni-193165
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-193165 -n newest-cni-193165: exit status 7 (65.740656ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-193165 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (27.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-193165 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2
E0610 14:46:07.861154   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/auto-474189/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-193165 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.2: (26.888063129s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-193165 -n newest-cni-193165
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (27.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6ms46" [d7eef82c-39f6-488f-99dd-633fa29af211] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0610 14:46:19.764040   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/bridge-474189/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6ms46" [d7eef82c-39f6-488f-99dd-633fa29af211] Running
E0610 14:46:20.216373   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/kindnet-474189/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.025690127s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6ms46" [d7eef82c-39f6-488f-99dd-633fa29af211] Running
E0610 14:46:30.399938   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/flannel-474189/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008573057s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-352707 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-352707 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-jzbp2" [40e22507-fb00-4e7a-b806-aa5e2a7c6911] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-jzbp2" [40e22507-fb00-4e7a-b806-aa5e2a7c6911] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.039929574s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-352707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-352707 -n embed-certs-352707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-352707 -n embed-certs-352707: exit status 2 (349.661512ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-352707 -n embed-certs-352707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-352707 -n embed-certs-352707: exit status 2 (356.912538ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-352707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-352707 -n embed-certs-352707
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-352707 -n embed-certs-352707
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-193165 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-193165 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-193165 -n newest-cni-193165
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-193165 -n newest-cni-193165: exit status 2 (303.544351ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-193165 -n newest-cni-193165
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-193165 -n newest-cni-193165: exit status 2 (281.998389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-193165 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-193165 -n newest-cni-193165
E0610 14:46:36.944915   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/ingress-addon-legacy-889215/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-193165 -n newest-cni-193165
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-jzbp2" [40e22507-fb00-4e7a-b806-aa5e2a7c6911] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006710934s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-224879 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-224879 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-224879 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-224879 -n default-k8s-diff-port-224879
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-224879 -n default-k8s-diff-port-224879: exit status 2 (254.678965ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-224879 -n default-k8s-diff-port-224879
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-224879 -n default-k8s-diff-port-224879: exit status 2 (254.338399ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-224879 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-224879 -n default-k8s-diff-port-224879
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-224879 -n default-k8s-diff-port-224879
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.40s)

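The Pause entries above all follow the same pattern: `minikube pause`, read component state with `status --format`, tolerate the non-zero exit while paused, then `unpause` and re-check. A minimal sketch of that sequence, assuming a hypothetical profile name (`demo` below is not from this report) and skipping cleanly when `minikube` is not on PATH:

```python
import shutil
import subprocess

def check_paused_state(profile: str) -> list[str]:
    """Sketch of the pause verification loop: pause the profile, read the
    APIServer and Kubelet fields via `minikube status --format`, then unpause.

    A non-zero exit from `status` is tolerated here, matching the
    "status error: exit status 2 (may be ok)" lines in the log above.
    """
    if shutil.which("minikube") is None:
        return ["skipped: minikube not installed"]
    subprocess.run(["minikube", "pause", "-p", profile], check=True)
    states = []
    for field in ("{{.APIServer}}", "{{.Kubelet}}"):
        # check=False: exit status 2 is expected while components are paused.
        proc = subprocess.run(
            ["minikube", "status", "--format", field, "-p", profile],
            capture_output=True, text=True, check=False,
        )
        states.append(proc.stdout.strip())  # e.g. "Paused" / "Stopped"
    subprocess.run(["minikube", "unpause", "-p", profile], check=True)
    return states

print(check_paused_state("demo"))  # "demo" is a hypothetical profile name
```

This mirrors the order of operations in the log; the real tests live in start_stop_delete_test.go and perform additional status parsing.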
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-w6zdf" [17a73813-0feb-4123-9254-519abb8b123a] Running
E0610 14:47:53.585353   25485 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15074-18675/.minikube/profiles/enable-default-cni-474189/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012487992s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-w6zdf" [17a73813-0feb-4123-9254-519abb8b123a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005506326s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-696705 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-696705 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

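The VerifyKubernetesImages entries above dump `sudo crictl images -o json` over SSH and report any repo tag that is not in the expected minikube image set. A simplified sketch of that filtering step, using a trimmed sample of crictl's JSON shape (the `images`/`repoTags` fields follow CRI's ListImages response; the expected-image set here is illustrative, not minikube's real list):

```python
import json

# Trimmed sample of `crictl images -o json` output (CRI ListImages shape).
SAMPLE = json.dumps({
    "images": [
        {"repoTags": ["registry.k8s.io/pause:3.9"]},
        {"repoTags": ["docker.io/kindest/kindnetd:v20230511-dc714da8"]},
        {"repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]},
    ]
})

def non_minikube_images(crictl_json: str, expected: set[str]) -> list[str]:
    """Return repo tags present on the node but absent from the expected set,
    mirroring the test's "Found non-minikube image" report lines."""
    found = []
    for image in json.loads(crictl_json).get("images", []):
        for tag in image.get("repoTags", []):
            if tag not in expected:
                found.append(tag)
    return found

expected = {"registry.k8s.io/pause:3.9"}  # illustrative, not the real list
for tag in non_minikube_images(SAMPLE, expected):
    print("Found non-minikube image:", tag)
```

As in the log, leftover images (kindnetd from a kind-based CNI, busybox from earlier tests) are reported but do not fail the test.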
TestStartStop/group/old-k8s-version/serial/Pause (2.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-696705 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-696705 -n old-k8s-version-696705
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-696705 -n old-k8s-version-696705: exit status 2 (257.905782ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-696705 -n old-k8s-version-696705
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-696705 -n old-k8s-version-696705: exit status 2 (254.244647ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-696705 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-696705 -n old-k8s-version-696705
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-696705 -n old-k8s-version-696705
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.39s)

Test skip (23/302)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnly/v1.27.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.27.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.2/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:458: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.39s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-474189 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-474189

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-474189" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-474189" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: kubelet daemon config:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> k8s: kubelet logs:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-474189

>>> host: docker daemon status:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: docker daemon config:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: docker system info:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: cri-docker daemon status:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: cri-docker daemon config:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: cri-dockerd version:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: containerd daemon status:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: containerd daemon config:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: containerd config dump:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: crio daemon status:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: crio daemon config:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: /etc/crio:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

>>> host: crio config:
* Profile "kubenet-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474189"

----------------------- debugLogs end: kubenet-474189 [took: 3.208404291s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-474189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-474189
--- SKIP: TestNetworkPlugins/group/kubenet (3.39s)

TestNetworkPlugins/group/cilium (3.54s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-474189 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-474189

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-474189

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-474189

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-474189

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-474189

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-474189

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-474189

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-474189

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-474189

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-474189

>>> host: /etc/nsswitch.conf:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /etc/hosts:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /etc/resolv.conf:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-474189

>>> host: crictl pods:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: crictl containers:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> k8s: describe netcat deployment:
error: context "cilium-474189" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-474189" does not exist

>>> k8s: netcat logs:
error: context "cilium-474189" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-474189" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-474189" does not exist

>>> k8s: coredns logs:
error: context "cilium-474189" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-474189" does not exist

>>> k8s: api server logs:
error: context "cilium-474189" does not exist

>>> host: /etc/cni:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: ip a s:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: ip r s:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: iptables-save:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: iptables table nat:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-474189

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-474189

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-474189" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-474189" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-474189

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-474189

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-474189" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-474189" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-474189" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-474189" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-474189" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: kubelet daemon config:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> k8s: kubelet logs:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-474189

>>> host: docker daemon status:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: docker daemon config:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: docker system info:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: cri-docker daemon status:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: cri-docker daemon config:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: cri-dockerd version:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: containerd daemon status:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: containerd daemon config:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: containerd config dump:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: crio daemon status:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: crio daemon config:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: /etc/crio:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

>>> host: crio config:
* Profile "cilium-474189" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474189"

----------------------- debugLogs end: cilium-474189 [took: 3.41109432s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-474189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-474189
--- SKIP: TestNetworkPlugins/group/cilium (3.54s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-513770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-513770
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)