Test Report: Docker_Linux_crio 16761

ca7e0fd59d4571f4bf5c8ef52ccb5634a88f3699:2023-06-26:29886

Test fail (6/303)

Order | Failed test                                                  | Duration (s)
25    | TestAddons/parallel/Ingress                                  | 161.61
138   | TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon | 12.25
153   | TestIngressAddonLegacy/serial/ValidateIngressAddons          | 179.36
203   | TestMultiNode/serial/PingHostFrom2Pods                       | 3.43
224   | TestRunningBinaryUpgrade                                     | 69.84
232   | TestStoppedBinaryUpgrade/Upgrade                             | 111.91
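
The headline failure, TestAddons/parallel/Ingress, comes down to the in-node curl against the ingress controller never answering: the ssh'd curl exits with status 28, curl's "operation timed out" code (see the stderr in the log below). A minimal way to re-run that step by hand, assuming the same minikube binary and the addons-052687 profile from this run with the ingress addon enabled, is:

	# re-run the exact check the test performs inside the minikube node
	out/minikube-linux-amd64 -p addons-052687 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

A healthy ingress-nginx controller should answer with the test's nginx pod response instead of letting the request hang until curl times out.
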
TestAddons/parallel/Ingress (161.61s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-052687 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-052687 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-052687 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4d115a0d-916b-4bbb-b892-0c49fe2ce829] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4d115a0d-916b-4bbb-b892-0c49fe2ce829] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 20.008078661s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-052687 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.486696623s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-052687 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-052687 addons disable ingress --alsologtostderr -v=1: (7.423070634s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-052687
helpers_test.go:235: (dbg) docker inspect addons-052687:

-- stdout --
	[
	    {
	        "Id": "fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060",
	        "Created": "2023-06-26T18:26:40.724044515Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 338514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-26T18:26:41.00435847Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:42a2b4e0d52aa58abe36e9abb680d93c11444dcb07814b595a45d2fa0f8a777c",
	        "ResolvConfPath": "/var/lib/docker/containers/fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060/hosts",
	        "LogPath": "/var/lib/docker/containers/fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060/fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060-json.log",
	        "Name": "/addons-052687",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-052687:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-052687",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b694513696d90edbdbc6f2ddfcd94d026b41603f7f14cdf7731a9c4cb1649d4a-init/diff:/var/lib/docker/overlay2/8f9a4266fd693ed66b9874436fe49dcae15615f8bcd132a5a8e8ba2403f6ef40/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b694513696d90edbdbc6f2ddfcd94d026b41603f7f14cdf7731a9c4cb1649d4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b694513696d90edbdbc6f2ddfcd94d026b41603f7f14cdf7731a9c4cb1649d4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b694513696d90edbdbc6f2ddfcd94d026b41603f7f14cdf7731a9c4cb1649d4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-052687",
	                "Source": "/var/lib/docker/volumes/addons-052687/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-052687",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-052687",
	                "name.minikube.sigs.k8s.io": "addons-052687",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73ba6986670accad4a0e8aad743a8f32e92ab08be660b1431c44a9909a453210",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/73ba6986670a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-052687": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "fe4be2990cc5",
	                        "addons-052687"
	                    ],
	                    "NetworkID": "cbb5541e30ac592bd39d1514746ebeb67ce0c7bae3871d5bf8cf391e3e35ea49",
	                    "EndpointID": "f7a869dd2664b41a2f2f65e752c9d89a9c16772142165c7ae3cb884339cf6c42",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-052687 -n addons-052687
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-052687 logs -n 25: (1.14906211s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-015118   | jenkins | v1.30.1 | 26 Jun 23 18:25 UTC |                     |
	|         | -p download-only-015118        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-015118   | jenkins | v1.30.1 | 26 Jun 23 18:25 UTC |                     |
	|         | -p download-only-015118        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 26 Jun 23 18:26 UTC | 26 Jun 23 18:26 UTC |
	| delete  | -p download-only-015118        | download-only-015118   | jenkins | v1.30.1 | 26 Jun 23 18:26 UTC | 26 Jun 23 18:26 UTC |
	| delete  | -p download-only-015118        | download-only-015118   | jenkins | v1.30.1 | 26 Jun 23 18:26 UTC | 26 Jun 23 18:26 UTC |
	| start   | --download-only -p             | download-docker-097588 | jenkins | v1.30.1 | 26 Jun 23 18:26 UTC |                     |
	|         | download-docker-097588         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-097588      | download-docker-097588 | jenkins | v1.30.1 | 26 Jun 23 18:26 UTC | 26 Jun 23 18:26 UTC |
	| start   | --download-only -p             | binary-mirror-877393   | jenkins | v1.30.1 | 26 Jun 23 18:26 UTC |                     |
	|         | binary-mirror-877393           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32817         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-877393        | binary-mirror-877393   | jenkins | v1.30.1 | 26 Jun 23 18:26 UTC | 26 Jun 23 18:26 UTC |
	| start   | -p addons-052687               | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:26 UTC | 26 Jun 23 18:28 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	|         | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:28 UTC | 26 Jun 23 18:28 UTC |
	|         | addons-052687                  |                        |         |         |                     |                     |
	| addons  | addons-052687 addons           | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:28 UTC | 26 Jun 23 18:28 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:28 UTC | 26 Jun 23 18:28 UTC |
	|         | -p addons-052687               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-052687 addons disable   | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:28 UTC | 26 Jun 23 18:28 UTC |
	|         | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| ip      | addons-052687 ip               | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:28 UTC | 26 Jun 23 18:28 UTC |
	| addons  | addons-052687 addons disable   | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:28 UTC | 26 Jun 23 18:28 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:28 UTC | 26 Jun 23 18:29 UTC |
	|         | addons-052687                  |                        |         |         |                     |                     |
	| ssh     | addons-052687 ssh curl -s      | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:29 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-052687 addons           | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:29 UTC | 26 Jun 23 18:29 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-052687 addons           | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:29 UTC | 26 Jun 23 18:29 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-052687 ip               | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:31 UTC | 26 Jun 23 18:31 UTC |
	| addons  | addons-052687 addons disable   | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:31 UTC | 26 Jun 23 18:31 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-052687 addons disable   | addons-052687          | jenkins | v1.30.1 | 26 Jun 23 18:31 UTC | 26 Jun 23 18:31 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 18:26:19
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 18:26:19.241614  337862 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:26:19.241737  337862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:26:19.241745  337862 out.go:309] Setting ErrFile to fd 2...
	I0626 18:26:19.241749  337862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:26:19.241863  337862 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 18:26:19.242458  337862 out.go:303] Setting JSON to false
	I0626 18:26:19.243901  337862 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4129,"bootTime":1687799850,"procs":859,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 18:26:19.243964  337862 start.go:137] virtualization: kvm guest
	I0626 18:26:19.278534  337862 out.go:177] * [addons-052687] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 18:26:19.340407  337862 notify.go:220] Checking for updates...
	I0626 18:26:19.371923  337862 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 18:26:19.434859  337862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 18:26:19.508188  337862 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:26:19.571578  337862 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 18:26:19.633810  337862 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 18:26:19.696030  337862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 18:26:19.759209  337862 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 18:26:19.779987  337862 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 18:26:19.780133  337862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:26:19.828989  337862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-06-26 18:26:19.81986903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:26:19.829116  337862 docker.go:294] overlay module found
	I0626 18:26:19.905849  337862 out.go:177] * Using the docker driver based on user configuration
	I0626 18:26:19.977641  337862 start.go:297] selected driver: docker
	I0626 18:26:19.977677  337862 start.go:954] validating driver "docker" against <nil>
	I0626 18:26:19.977695  337862 start.go:965] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 18:26:19.978502  337862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:26:20.024493  337862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-06-26 18:26:20.016531839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:26:20.024648  337862 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 18:26:20.024900  337862 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 18:26:20.072069  337862 out.go:177] * Using Docker driver with root privileges
	I0626 18:26:20.144767  337862 cni.go:84] Creating CNI manager for ""
	I0626 18:26:20.144811  337862 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0626 18:26:20.144822  337862 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0626 18:26:20.144837  337862 start_flags.go:319] config:
	{Name:addons-052687 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-052687 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:26:20.206476  337862 out.go:177] * Starting control plane node addons-052687 in cluster addons-052687
	I0626 18:26:20.270462  337862 cache.go:122] Beginning downloading kic base image for docker with crio
	I0626 18:26:20.342812  337862 out.go:177] * Pulling base image ...
	I0626 18:26:20.406378  337862 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:26:20.406444  337862 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon
	I0626 18:26:20.406468  337862 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 18:26:20.406480  337862 cache.go:57] Caching tarball of preloaded images
	I0626 18:26:20.406601  337862 preload.go:174] Found /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 18:26:20.406615  337862 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 18:26:20.407032  337862 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/config.json ...
	I0626 18:26:20.407062  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/config.json: {Name:mk716b43a3f72f6d47bf11ab5c35c5af70fa43b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:20.423275  337862 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 to local cache
	I0626 18:26:20.423381  337862 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local cache directory
	I0626 18:26:20.423397  337862 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local cache directory, skipping pull
	I0626 18:26:20.423401  337862 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 exists in cache, skipping pull
	I0626 18:26:20.423407  337862 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 as a tarball
	I0626 18:26:20.423414  337862 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 from local cache
	I0626 18:26:31.934035  337862 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 from cached tarball
	I0626 18:26:31.934077  337862 cache.go:195] Successfully downloaded all kic artifacts
	I0626 18:26:31.934121  337862 start.go:365] acquiring machines lock for addons-052687: {Name:mk21a0dff683ce27d6d2e81cab72b4ebe70af962 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:26:31.934229  337862 start.go:369] acquired machines lock for "addons-052687" in 89.981µs
	I0626 18:26:31.934254  337862 start.go:93] Provisioning new machine with config: &{Name:addons-052687 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-052687 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 18:26:31.934372  337862 start.go:125] createHost starting for "" (driver="docker")
	I0626 18:26:31.936412  337862 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0626 18:26:31.936662  337862 start.go:159] libmachine.API.Create for "addons-052687" (driver="docker")
	I0626 18:26:31.936694  337862 client.go:168] LocalClient.Create starting
	I0626 18:26:31.936790  337862 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem
	I0626 18:26:32.035140  337862 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem
	I0626 18:26:32.190114  337862 cli_runner.go:164] Run: docker network inspect addons-052687 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0626 18:26:32.205600  337862 cli_runner.go:211] docker network inspect addons-052687 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0626 18:26:32.205676  337862 network_create.go:281] running [docker network inspect addons-052687] to gather additional debugging logs...
	I0626 18:26:32.205698  337862 cli_runner.go:164] Run: docker network inspect addons-052687
	W0626 18:26:32.220447  337862 cli_runner.go:211] docker network inspect addons-052687 returned with exit code 1
	I0626 18:26:32.220479  337862 network_create.go:284] error running [docker network inspect addons-052687]: docker network inspect addons-052687: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-052687 not found
	I0626 18:26:32.220492  337862 network_create.go:286] output of [docker network inspect addons-052687]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-052687 not found
	
	** /stderr **
	I0626 18:26:32.220543  337862 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0626 18:26:32.235855  337862 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016967c0}
	I0626 18:26:32.235902  337862 network_create.go:123] attempt to create docker network addons-052687 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0626 18:26:32.235955  337862 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-052687 addons-052687
	I0626 18:26:32.284625  337862 network_create.go:107] docker network addons-052687 192.168.49.0/24 created
	I0626 18:26:32.284669  337862 kic.go:117] calculated static IP "192.168.49.2" for the "addons-052687" container
	I0626 18:26:32.284746  337862 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0626 18:26:32.299264  337862 cli_runner.go:164] Run: docker volume create addons-052687 --label name.minikube.sigs.k8s.io=addons-052687 --label created_by.minikube.sigs.k8s.io=true
	I0626 18:26:32.316138  337862 oci.go:103] Successfully created a docker volume addons-052687
	I0626 18:26:32.316226  337862 cli_runner.go:164] Run: docker run --rm --name addons-052687-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-052687 --entrypoint /usr/bin/test -v addons-052687:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -d /var/lib
	I0626 18:26:35.827992  337862 cli_runner.go:217] Completed: docker run --rm --name addons-052687-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-052687 --entrypoint /usr/bin/test -v addons-052687:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -d /var/lib: (3.511691511s)
	I0626 18:26:35.828028  337862 oci.go:107] Successfully prepared a docker volume addons-052687
	I0626 18:26:35.828046  337862 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:26:35.828068  337862 kic.go:190] Starting extracting preloaded images to volume ...
	I0626 18:26:35.828127  337862 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-052687:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -I lz4 -xf /preloaded.tar -C /extractDir
	I0626 18:26:40.664928  337862 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-052687:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -I lz4 -xf /preloaded.tar -C /extractDir: (4.836750305s)
	I0626 18:26:40.664966  337862 kic.go:199] duration metric: took 4.836891 seconds to extract preloaded images to volume
	W0626 18:26:40.665122  337862 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0626 18:26:40.665242  337862 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0626 18:26:40.709981  337862 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-052687 --name addons-052687 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-052687 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-052687 --network addons-052687 --ip 192.168.49.2 --volume addons-052687:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0626 18:26:41.012004  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Running}}
	I0626 18:26:41.029118  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:26:41.046895  337862 cli_runner.go:164] Run: docker exec addons-052687 stat /var/lib/dpkg/alternatives/iptables
	I0626 18:26:41.092507  337862 oci.go:144] the created container "addons-052687" has a running status.
	I0626 18:26:41.092544  337862 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa...
	I0626 18:26:41.305612  337862 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0626 18:26:41.328939  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:26:41.350953  337862 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0626 18:26:41.350984  337862 kic_runner.go:114] Args: [docker exec --privileged addons-052687 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0626 18:26:41.420211  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:26:41.442464  337862 machine.go:88] provisioning docker machine ...
	I0626 18:26:41.442514  337862 ubuntu.go:169] provisioning hostname "addons-052687"
	I0626 18:26:41.442596  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:26:41.459646  337862 main.go:141] libmachine: Using SSH client type: native
	I0626 18:26:41.460054  337862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33082 <nil> <nil>}
	I0626 18:26:41.460070  337862 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-052687 && echo "addons-052687" | sudo tee /etc/hostname
	I0626 18:26:41.680042  337862 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-052687
	
	I0626 18:26:41.680132  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:26:41.697722  337862 main.go:141] libmachine: Using SSH client type: native
	I0626 18:26:41.698299  337862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33082 <nil> <nil>}
	I0626 18:26:41.698327  337862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-052687' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-052687/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-052687' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 18:26:41.828853  337862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 18:26:41.828906  337862 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16761-330054/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-330054/.minikube}
	I0626 18:26:41.828931  337862 ubuntu.go:177] setting up certificates
	I0626 18:26:41.828940  337862 provision.go:83] configureAuth start
	I0626 18:26:41.828990  337862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-052687
	I0626 18:26:41.844518  337862 provision.go:138] copyHostCerts
	I0626 18:26:41.844593  337862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem (1082 bytes)
	I0626 18:26:41.844705  337862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem (1123 bytes)
	I0626 18:26:41.844758  337862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem (1679 bytes)
	I0626 18:26:41.844840  337862 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem org=jenkins.addons-052687 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-052687]
	I0626 18:26:42.065382  337862 provision.go:172] copyRemoteCerts
	I0626 18:26:42.065449  337862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 18:26:42.065485  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:26:42.082144  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:26:42.177257  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0626 18:26:42.198285  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0626 18:26:42.218786  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 18:26:42.239095  337862 provision.go:86] duration metric: configureAuth took 410.13044ms
	I0626 18:26:42.239121  337862 ubuntu.go:193] setting minikube options for container-runtime
	I0626 18:26:42.239307  337862 config.go:182] Loaded profile config "addons-052687": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:26:42.239417  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:26:42.255490  337862 main.go:141] libmachine: Using SSH client type: native
	I0626 18:26:42.255881  337862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33082 <nil> <nil>}
	I0626 18:26:42.255898  337862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 18:26:42.472493  337862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 18:26:42.472524  337862 machine.go:91] provisioned docker machine in 1.030035402s
	I0626 18:26:42.472533  337862 client.go:171] LocalClient.Create took 10.535835254s
	I0626 18:26:42.472554  337862 start.go:167] duration metric: libmachine.API.Create for "addons-052687" took 10.535893784s
	I0626 18:26:42.472564  337862 start.go:300] post-start starting for "addons-052687" (driver="docker")
	I0626 18:26:42.472576  337862 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 18:26:42.472652  337862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 18:26:42.472697  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:26:42.488955  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:26:42.581712  337862 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 18:26:42.584826  337862 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0626 18:26:42.584885  337862 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0626 18:26:42.584901  337862 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0626 18:26:42.584910  337862 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0626 18:26:42.584926  337862 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/addons for local assets ...
	I0626 18:26:42.584991  337862 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/files for local assets ...
	I0626 18:26:42.585024  337862 start.go:303] post-start completed in 112.452557ms
	I0626 18:26:42.585328  337862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-052687
	I0626 18:26:42.600928  337862 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/config.json ...
	I0626 18:26:42.601172  337862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 18:26:42.601212  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:26:42.617256  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:26:42.705782  337862 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0626 18:26:42.709831  337862 start.go:128] duration metric: createHost completed in 10.775443885s
	I0626 18:26:42.709859  337862 start.go:83] releasing machines lock for "addons-052687", held for 10.77561957s
	I0626 18:26:42.709916  337862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-052687
	I0626 18:26:42.725909  337862 ssh_runner.go:195] Run: cat /version.json
	I0626 18:26:42.725975  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:26:42.725910  337862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 18:26:42.726091  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:26:42.741987  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:26:42.742308  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:26:42.828806  337862 ssh_runner.go:195] Run: systemctl --version
	I0626 18:26:42.922297  337862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 18:26:43.059831  337862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 18:26:43.063997  337862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:26:43.081252  337862 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0626 18:26:43.081330  337862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:26:43.108090  337862 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
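The two find/mv commands above disable the preinstalled loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so kindnet (configured later in this run) is the only CNI config CRI-O will load. A hedged sketch, not part of this run, of how the rename can be undone if the directory ever needs to be handed back:

    for f in /etc/cni/net.d/*.mk_disabled; do
      [ -e "$f" ] || continue            # nothing to do if no configs were disabled
      sudo mv "$f" "${f%.mk_disabled}"   # strip the suffix to re-enable the config
    done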
	I0626 18:26:43.108113  337862 start.go:466] detecting cgroup driver to use...
	I0626 18:26:43.108148  337862 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0626 18:26:43.108188  337862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 18:26:43.121993  337862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 18:26:43.131541  337862 docker.go:196] disabling cri-docker service (if available) ...
	I0626 18:26:43.131596  337862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 18:26:43.143370  337862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 18:26:43.155366  337862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 18:26:43.227147  337862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 18:26:43.304327  337862 docker.go:212] disabling docker service ...
	I0626 18:26:43.304398  337862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 18:26:43.322306  337862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 18:26:43.332487  337862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 18:26:43.403159  337862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 18:26:43.486912  337862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 18:26:43.496724  337862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 18:26:43.510665  337862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 18:26:43.510729  337862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:26:43.519007  337862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 18:26:43.519082  337862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:26:43.527408  337862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:26:43.535765  337862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:26:43.544220  337862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 18:26:43.551995  337862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 18:26:43.559282  337862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 18:26:43.566557  337862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 18:26:43.638880  337862 ssh_runner.go:195] Run: sudo systemctl restart crio
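The sed edits above rewrite the 02-crio.conf drop-in so that CRI-O uses registry.k8s.io/pause:3.9 as the pause image, cgroupfs as the cgroup manager and "pod" as the conmon cgroup, and the restart makes those settings take effect. A hedged way to spot-check the result from inside the node (file path and keys taken from the commands above, not output from this run):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'   # crio config is also run below by minikube to read the runtime configuration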
	I0626 18:26:43.749312  337862 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 18:26:43.749377  337862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 18:26:43.752570  337862 start.go:534] Will wait 60s for crictl version
	I0626 18:26:43.752623  337862 ssh_runner.go:195] Run: which crictl
	I0626 18:26:43.755579  337862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 18:26:43.787900  337862 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0626 18:26:43.787983  337862 ssh_runner.go:195] Run: crio --version
	I0626 18:26:43.821304  337862 ssh_runner.go:195] Run: crio --version
	I0626 18:26:43.856353  337862 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0626 18:26:43.857943  337862 cli_runner.go:164] Run: docker network inspect addons-052687 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0626 18:26:43.874004  337862 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0626 18:26:43.877518  337862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 18:26:43.887347  337862 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:26:43.887397  337862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 18:26:43.935406  337862 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 18:26:43.935429  337862 crio.go:415] Images already preloaded, skipping extraction
	I0626 18:26:43.935470  337862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 18:26:43.968331  337862 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 18:26:43.968353  337862 cache_images.go:84] Images are preloaded, skipping loading
	I0626 18:26:43.968414  337862 ssh_runner.go:195] Run: crio config
	I0626 18:26:44.008306  337862 cni.go:84] Creating CNI manager for ""
	I0626 18:26:44.008331  337862 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0626 18:26:44.008346  337862 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 18:26:44.008368  337862 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-052687 NodeName:addons-052687 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 18:26:44.008530  337862 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-052687"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
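The rendered kubeadm config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into a single file; it is written to /var/tmp/minikube/kubeadm.yaml.new further down and then handed to kubeadm init via --config. As a hedged aside, not something this run does, a file like this can be exercised without changing node state by using kubeadm's dry-run mode:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # prints what init would do instead of applying it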
	
	I0626 18:26:44.008615  337862 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-052687 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-052687 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 18:26:44.008678  337862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 18:26:44.016677  337862 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 18:26:44.016731  337862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 18:26:44.024336  337862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0626 18:26:44.039784  337862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 18:26:44.055030  337862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0626 18:26:44.070390  337862 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0626 18:26:44.073412  337862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 18:26:44.082698  337862 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687 for IP: 192.168.49.2
	I0626 18:26:44.082728  337862 certs.go:190] acquiring lock for shared ca certs: {Name:mk5dcd9e05f1fa507f67df494d102e50ef2554ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:44.082834  337862 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key
	I0626 18:26:44.372052  337862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt ...
	I0626 18:26:44.372086  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt: {Name:mk0fb83d08bb54dc42f4ee9fdb2444a4e0866f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:44.372261  337862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key ...
	I0626 18:26:44.372273  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key: {Name:mk54796d0290be5285a7e94ee22ff5407f0b67d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:44.372342  337862 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key
	I0626 18:26:44.499219  337862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.crt ...
	I0626 18:26:44.499256  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.crt: {Name:mk99c9ac1b5dca845eaca64c33425e365c86ef2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:44.499467  337862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key ...
	I0626 18:26:44.499486  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key: {Name:mkf07c8f73dffdf591feda1398c9552ede85f2d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:44.499622  337862 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.key
	I0626 18:26:44.499638  337862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt with IP's: []
	I0626 18:26:44.728123  337862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt ...
	I0626 18:26:44.728161  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: {Name:mkeba08d1703e15f04e8da5447762c5b451595ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:44.728363  337862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.key ...
	I0626 18:26:44.728379  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.key: {Name:mkb1739e933ce357544cba1239325946a5816f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:44.728473  337862 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.key.dd3b5fb2
	I0626 18:26:44.728494  337862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0626 18:26:44.912554  337862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.crt.dd3b5fb2 ...
	I0626 18:26:44.912590  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.crt.dd3b5fb2: {Name:mk08b03abebb74d3e10d69bbf3f36dd1f29f1e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:44.912796  337862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.key.dd3b5fb2 ...
	I0626 18:26:44.912813  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.key.dd3b5fb2: {Name:mk19924377f7d62ab2cff28633c5eb0c81e93e16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:44.912934  337862 certs.go:337] copying /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.crt
	I0626 18:26:44.913038  337862 certs.go:341] copying /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.key
	I0626 18:26:44.913087  337862 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/proxy-client.key
	I0626 18:26:44.913105  337862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/proxy-client.crt with IP's: []
	I0626 18:26:45.329765  337862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/proxy-client.crt ...
	I0626 18:26:45.329803  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/proxy-client.crt: {Name:mk4d48e72839d6891fec57851bd4865e67892bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:45.329984  337862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/proxy-client.key ...
	I0626 18:26:45.329995  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/proxy-client.key: {Name:mk6152ef8cfa7df069b89a33b0e18434a74a1888 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:26:45.330189  337862 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 18:26:45.330226  337862 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem (1082 bytes)
	I0626 18:26:45.330253  337862 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem (1123 bytes)
	I0626 18:26:45.330276  337862 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem (1679 bytes)
	I0626 18:26:45.330989  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 18:26:45.353069  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 18:26:45.373629  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 18:26:45.393862  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 18:26:45.414051  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 18:26:45.434505  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 18:26:45.454858  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 18:26:45.474871  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 18:26:45.495455  337862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 18:26:45.515994  337862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 18:26:45.530746  337862 ssh_runner.go:195] Run: openssl version
	I0626 18:26:45.535763  337862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 18:26:45.544008  337862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:26:45.547037  337862 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:26:45.547090  337862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:26:45.553180  337862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
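The b5213941.0 name in the command above is not arbitrary: OpenSSL resolves CA certificates by the subject-name hash that openssl x509 -hash prints (the command run just above), plus a numeric suffix to disambiguate hash collisions. A hedged sketch that derives the same link name from the certificate itself, using the files referenced above:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # for this CA the hash is b5213941, hence b5213941.0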
	I0626 18:26:45.560930  337862 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 18:26:45.563782  337862 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 18:26:45.563831  337862 kubeadm.go:404] StartCluster: {Name:addons-052687 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-052687 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:26:45.563955  337862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 18:26:45.563989  337862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 18:26:45.596014  337862 cri.go:89] found id: ""
	I0626 18:26:45.596075  337862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 18:26:45.604083  337862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 18:26:45.611704  337862 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0626 18:26:45.611748  337862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 18:26:45.619305  337862 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 18:26:45.619352  337862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0626 18:26:45.695713  337862 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-gcp\n", err: exit status 1
	I0626 18:26:45.759706  337862 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 18:26:54.264744  337862 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 18:26:54.264851  337862 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 18:26:54.264970  337862 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0626 18:26:54.265046  337862 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1036-gcp
	I0626 18:26:54.265118  337862 kubeadm.go:322] OS: Linux
	I0626 18:26:54.265196  337862 kubeadm.go:322] CGROUPS_CPU: enabled
	I0626 18:26:54.265257  337862 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0626 18:26:54.265326  337862 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0626 18:26:54.265384  337862 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0626 18:26:54.265456  337862 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0626 18:26:54.265523  337862 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0626 18:26:54.265584  337862 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0626 18:26:54.265642  337862 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0626 18:26:54.265707  337862 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0626 18:26:54.265801  337862 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 18:26:54.265915  337862 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 18:26:54.265993  337862 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 18:26:54.266050  337862 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 18:26:54.267545  337862 out.go:204]   - Generating certificates and keys ...
	I0626 18:26:54.267629  337862 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 18:26:54.267687  337862 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 18:26:54.267750  337862 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0626 18:26:54.267796  337862 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0626 18:26:54.267848  337862 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0626 18:26:54.267890  337862 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0626 18:26:54.267936  337862 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0626 18:26:54.268032  337862 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-052687 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0626 18:26:54.268077  337862 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0626 18:26:54.268168  337862 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-052687 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0626 18:26:54.268228  337862 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0626 18:26:54.268279  337862 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0626 18:26:54.268360  337862 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0626 18:26:54.268437  337862 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 18:26:54.268531  337862 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 18:26:54.268590  337862 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 18:26:54.268703  337862 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 18:26:54.268774  337862 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 18:26:54.268923  337862 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 18:26:54.269024  337862 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 18:26:54.269080  337862 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 18:26:54.269141  337862 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 18:26:54.271236  337862 out.go:204]   - Booting up control plane ...
	I0626 18:26:54.271309  337862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 18:26:54.271374  337862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 18:26:54.271431  337862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 18:26:54.271508  337862 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 18:26:54.271642  337862 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 18:26:54.271721  337862 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.501781 seconds
	I0626 18:26:54.271807  337862 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 18:26:54.271899  337862 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 18:26:54.271943  337862 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 18:26:54.272094  337862 kubeadm.go:322] [mark-control-plane] Marking the node addons-052687 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 18:26:54.272151  337862 kubeadm.go:322] [bootstrap-token] Using token: zs0h30.l3jy7jkljoyx3v6v
	I0626 18:26:54.273355  337862 out.go:204]   - Configuring RBAC rules ...
	I0626 18:26:54.273436  337862 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 18:26:54.273510  337862 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 18:26:54.273640  337862 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 18:26:54.273749  337862 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 18:26:54.273837  337862 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 18:26:54.273927  337862 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 18:26:54.274046  337862 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 18:26:54.274113  337862 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 18:26:54.274150  337862 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 18:26:54.274156  337862 kubeadm.go:322] 
	I0626 18:26:54.274208  337862 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 18:26:54.274215  337862 kubeadm.go:322] 
	I0626 18:26:54.274275  337862 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 18:26:54.274281  337862 kubeadm.go:322] 
	I0626 18:26:54.274305  337862 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 18:26:54.274353  337862 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 18:26:54.274395  337862 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 18:26:54.274402  337862 kubeadm.go:322] 
	I0626 18:26:54.274440  337862 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 18:26:54.274446  337862 kubeadm.go:322] 
	I0626 18:26:54.274494  337862 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 18:26:54.274499  337862 kubeadm.go:322] 
	I0626 18:26:54.274543  337862 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 18:26:54.274598  337862 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 18:26:54.274652  337862 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 18:26:54.274661  337862 kubeadm.go:322] 
	I0626 18:26:54.274725  337862 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 18:26:54.274786  337862 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 18:26:54.274792  337862 kubeadm.go:322] 
	I0626 18:26:54.274855  337862 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zs0h30.l3jy7jkljoyx3v6v \
	I0626 18:26:54.274937  337862 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac \
	I0626 18:26:54.274955  337862 kubeadm.go:322] 	--control-plane 
	I0626 18:26:54.274960  337862 kubeadm.go:322] 
	I0626 18:26:54.275034  337862 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 18:26:54.275049  337862 kubeadm.go:322] 
	I0626 18:26:54.275119  337862 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zs0h30.l3jy7jkljoyx3v6v \
	I0626 18:26:54.275211  337862 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac 
	I0626 18:26:54.275223  337862 cni.go:84] Creating CNI manager for ""
	I0626 18:26:54.275232  337862 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0626 18:26:54.276740  337862 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0626 18:26:54.277965  337862 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0626 18:26:54.292959  337862 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0626 18:26:54.292978  337862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0626 18:26:54.310725  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0626 18:26:54.925189  337862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 18:26:54.925332  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:54.925343  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=addons-052687 minikube.k8s.io/updated_at=2023_06_26T18_26_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:54.932165  337862 ops.go:34] apiserver oom_adj: -16
	I0626 18:26:55.015952  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:55.579847  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:56.079511  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:56.579913  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:57.079848  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:57.579276  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:58.079817  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:58.579952  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:59.079373  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:26:59.579725  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:00.080242  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:00.580051  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:01.079267  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:01.580186  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:02.079906  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:02.580171  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:03.080098  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:03.579294  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:04.080198  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:04.580255  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:05.079322  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:05.579620  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:06.079224  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:06.579240  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:07.080035  337862 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:27:07.146671  337862 kubeadm.go:1081] duration metric: took 12.22140094s to wait for elevateKubeSystemPrivileges.
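The run of identical kubectl get sa default invocations above is a poll loop: minikube re-issues the command roughly every half second until the "default" ServiceAccount exists, and the 12.22s duration metric is simply how long that took on this run. A hedged shell equivalent of the same wait, using the binary and kubeconfig paths shown above (not how minikube itself implements it):

    until sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default ServiceAccount has been created
    done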
	I0626 18:27:07.146708  337862 kubeadm.go:406] StartCluster complete in 21.582884189s
	I0626 18:27:07.146735  337862 settings.go:142] acquiring lock: {Name:mkb5ecb1b3f16a0c9ac49740714c898cb701a346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:27:07.146844  337862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:27:07.147203  337862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/kubeconfig: {Name:mk4c2529327c78ca1f9c9f9cbf169818d7b9a7d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:27:07.147387  337862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 18:27:07.147527  337862 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0626 18:27:07.147604  337862 config.go:182] Loaded profile config "addons-052687": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:27:07.147649  337862 addons.go:66] Setting ingress=true in profile "addons-052687"
	I0626 18:27:07.147666  337862 addons.go:66] Setting registry=true in profile "addons-052687"
	I0626 18:27:07.147670  337862 addons.go:66] Setting metrics-server=true in profile "addons-052687"
	I0626 18:27:07.147722  337862 addons.go:66] Setting inspektor-gadget=true in profile "addons-052687"
	I0626 18:27:07.147757  337862 addons.go:228] Setting addon metrics-server=true in "addons-052687"
	I0626 18:27:07.147769  337862 addons.go:228] Setting addon inspektor-gadget=true in "addons-052687"
	I0626 18:27:07.147822  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.147667  337862 addons.go:66] Setting default-storageclass=true in profile "addons-052687"
	I0626 18:27:07.147823  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.147859  337862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-052687"
	I0626 18:27:07.147677  337862 addons.go:228] Setting addon ingress=true in "addons-052687"
	I0626 18:27:07.147982  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.147683  337862 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-052687"
	I0626 18:27:07.148104  337862 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-052687"
	I0626 18:27:07.148149  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.147684  337862 addons.go:228] Setting addon registry=true in "addons-052687"
	I0626 18:27:07.148224  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.148248  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.148310  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.148390  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.148459  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.147691  337862 addons.go:66] Setting ingress-dns=true in profile "addons-052687"
	I0626 18:27:07.148490  337862 addons.go:228] Setting addon ingress-dns=true in "addons-052687"
	I0626 18:27:07.148535  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.147691  337862 addons.go:66] Setting storage-provisioner=true in profile "addons-052687"
	I0626 18:27:07.148576  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.148579  337862 addons.go:228] Setting addon storage-provisioner=true in "addons-052687"
	I0626 18:27:07.148620  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.148652  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.147698  337862 addons.go:66] Setting gcp-auth=true in profile "addons-052687"
	I0626 18:27:07.148849  337862 mustload.go:65] Loading cluster: addons-052687
	I0626 18:27:07.148951  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.149024  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.149036  337862 config.go:182] Loaded profile config "addons-052687": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:27:07.149251  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.147707  337862 addons.go:66] Setting helm-tiller=true in profile "addons-052687"
	I0626 18:27:07.149639  337862 addons.go:228] Setting addon helm-tiller=true in "addons-052687"
	I0626 18:27:07.149683  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.147650  337862 addons.go:66] Setting volumesnapshots=true in profile "addons-052687"
	I0626 18:27:07.149736  337862 addons.go:228] Setting addon volumesnapshots=true in "addons-052687"
	I0626 18:27:07.149777  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.147678  337862 addons.go:66] Setting cloud-spanner=true in profile "addons-052687"
	I0626 18:27:07.149833  337862 addons.go:228] Setting addon cloud-spanner=true in "addons-052687"
	I0626 18:27:07.149882  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.150126  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.150299  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.150323  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.174261  337862 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0626 18:27:07.176268  337862 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0626 18:27:07.177747  337862 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0626 18:27:07.179159  337862 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0626 18:27:07.180644  337862 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0626 18:27:07.182015  337862 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0626 18:27:07.183459  337862 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0626 18:27:07.183484  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0626 18:27:07.182075  337862 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0626 18:27:07.183548  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.183586  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.185335  337862 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0626 18:27:07.187959  337862 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0626 18:27:07.189275  337862 addons.go:420] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0626 18:27:07.189304  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0626 18:27:07.189367  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.191312  337862 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.17.0
	I0626 18:27:07.191330  337862 out.go:177]   - Using image docker.io/registry:2.8.1
	I0626 18:27:07.192696  337862 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0626 18:27:07.195772  337862 addons.go:420] installing /etc/kubernetes/addons/registry-rc.yaml
	I0626 18:27:07.195793  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0626 18:27:07.195860  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.197584  337862 addons.go:420] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0626 18:27:07.197603  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0626 18:27:07.197658  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.210320  337862 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0626 18:27:07.211699  337862 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0626 18:27:07.211721  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0626 18:27:07.211799  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.229845  337862 addons.go:228] Setting addon default-storageclass=true in "addons-052687"
	I0626 18:27:07.229905  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:07.230279  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:07.237107  337862 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0626 18:27:07.239548  337862 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0626 18:27:07.239569  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0626 18:27:07.239641  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.237001  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.241364  337862 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0626 18:27:07.240155  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.246595  337862 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.6
	I0626 18:27:07.252345  337862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 18:27:07.250448  337862 addons.go:420] installing /etc/kubernetes/addons/deployment.yaml
	I0626 18:27:07.250465  337862 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.0
	I0626 18:27:07.251530  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.255401  337862 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0626 18:27:07.254098  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0626 18:27:07.254191  337862 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 18:27:07.257378  337862 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0626 18:27:07.257454  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.257805  337862 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 18:27:07.258670  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 18:27:07.258730  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.258895  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 18:27:07.258933  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.260715  337862 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0626 18:27:07.259164  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0626 18:27:07.260795  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.262137  337862 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0626 18:27:07.262148  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0626 18:27:07.262182  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:07.264023  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.264341  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.271771  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.282593  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.283655  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.286980  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.287242  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:07.287865  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	W0626 18:27:07.296065  337862 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0626 18:27:07.296099  337862 retry.go:31] will retry after 286.975276ms: ssh: handshake failed: EOF
	I0626 18:27:07.518452  337862 addons.go:420] installing /etc/kubernetes/addons/registry-svc.yaml
	I0626 18:27:07.518484  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0626 18:27:07.594167  337862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
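For readability: the sed pipeline in the command above rewrites the coredns ConfigMap so the Corefile gains a hosts stanza (plus a log directive) before being replaced in the cluster. Reconstructed from the command text itself, the injected block is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

This is what lets pods resolve host.minikube.internal to the gateway address of the cluster's Docker network; the "host record injected into CoreDNS's ConfigMap" line at 18:27:11 below confirms the replace succeeded.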
	I0626 18:27:07.693790  337862 addons.go:420] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0626 18:27:07.693890  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0626 18:27:07.694156  337862 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0626 18:27:07.694204  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0626 18:27:07.696649  337862 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0626 18:27:07.696712  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0626 18:27:07.702729  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0626 18:27:07.702997  337862 addons.go:420] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0626 18:27:07.703061  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0626 18:27:07.704459  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0626 18:27:07.707506  337862 addons.go:420] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0626 18:27:07.707533  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0626 18:27:07.792851  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0626 18:27:07.809416  337862 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-052687" context rescaled to 1 replicas
	I0626 18:27:07.809471  337862 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 18:27:07.811626  337862 out.go:177] * Verifying Kubernetes components...
	I0626 18:27:07.812935  337862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 18:27:07.817078  337862 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0626 18:27:07.817115  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0626 18:27:07.892836  337862 addons.go:420] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0626 18:27:07.892942  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0626 18:27:07.894008  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 18:27:07.896218  337862 addons.go:420] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0626 18:27:07.896275  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0626 18:27:07.906819  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0626 18:27:07.910124  337862 addons.go:420] installing /etc/kubernetes/addons/ig-role.yaml
	I0626 18:27:07.910154  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0626 18:27:07.914721  337862 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0626 18:27:07.914750  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0626 18:27:08.102339  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0626 18:27:08.108439  337862 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 18:27:08.108522  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0626 18:27:08.111786  337862 addons.go:420] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0626 18:27:08.111863  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0626 18:27:08.196740  337862 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0626 18:27:08.196829  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0626 18:27:08.293622  337862 addons.go:420] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0626 18:27:08.293749  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0626 18:27:08.393532  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 18:27:08.395658  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0626 18:27:08.412275  337862 addons.go:420] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0626 18:27:08.412306  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0626 18:27:08.494407  337862 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0626 18:27:08.494495  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0626 18:27:08.699046  337862 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0626 18:27:08.699143  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0626 18:27:08.810777  337862 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0626 18:27:08.810883  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0626 18:27:08.897054  337862 addons.go:420] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0626 18:27:08.897136  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0626 18:27:09.009952  337862 addons.go:420] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0626 18:27:09.010051  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0626 18:27:09.194955  337862 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0626 18:27:09.195048  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0626 18:27:09.202932  337862 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0626 18:27:09.203034  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0626 18:27:09.606543  337862 addons.go:420] installing /etc/kubernetes/addons/ig-crd.yaml
	I0626 18:27:09.606648  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0626 18:27:09.607654  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0626 18:27:09.904330  337862 addons.go:420] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0626 18:27:09.904411  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0626 18:27:09.915132  337862 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0626 18:27:09.915171  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0626 18:27:10.110085  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0626 18:27:10.200647  337862 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0626 18:27:10.200725  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0626 18:27:10.493656  337862 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0626 18:27:10.493736  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0626 18:27:10.693614  337862 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0626 18:27:10.693704  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0626 18:27:10.809557  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0626 18:27:11.093943  337862 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.499643676s)
	I0626 18:27:11.094029  337862 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0626 18:27:11.698408  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.993882954s)
	I0626 18:27:11.698534  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.995545684s)
	I0626 18:27:13.398494  337862 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.585438651s)
	I0626 18:27:13.398540  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.504463005s)
	I0626 18:27:13.398589  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.605619597s)
	I0626 18:27:13.398612  337862 addons.go:464] Verifying addon ingress=true in "addons-052687"
	I0626 18:27:13.398636  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.49178405s)
	I0626 18:27:13.398651  337862 addons.go:464] Verifying addon registry=true in "addons-052687"
	I0626 18:27:13.401254  337862 out.go:177] * Verifying registry addon...
	I0626 18:27:13.398757  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.296317224s)
	I0626 18:27:13.398791  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.005171373s)
	I0626 18:27:13.398860  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.00314413s)
	I0626 18:27:13.398985  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.791271811s)
	I0626 18:27:13.399050  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.288861012s)
	I0626 18:27:13.399889  337862 node_ready.go:35] waiting up to 6m0s for node "addons-052687" to be "Ready" ...
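node_ready.go polls the node object until its Ready condition reports True; the repeated has status "Ready":"False" lines further down are that poll loop. As an illustrative check (the command and jsonpath are not taken from the test code), the same condition can be read by hand with:

    kubectl --context addons-052687 get node addons-052687 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'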
	I0626 18:27:13.402865  337862 out.go:177] * Verifying ingress addon...
	W0626 18:27:13.403014  337862 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0626 18:27:13.404502  337862 retry.go:31] will retry after 289.28906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
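The failure above is an ordering race within a single apply: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml cannot be REST-mapped until the volumesnapshotclasses.snapshot.storage.k8s.io CRD created by the same command has been registered, hence "ensure CRDs are installed first" and the retry at 18:27:13.694 below, which re-runs the apply with --force. A generic way to avoid the race when applying CRDs together with their custom resources (a sketch under that assumption, not the addon's own code) is to apply the CRDs first and wait for them to be established:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml

In this log the retry simply re-applies everything once the CRDs from the first attempt are in place.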
	I0626 18:27:13.402977  337862 addons.go:464] Verifying addon metrics-server=true in "addons-052687"
	I0626 18:27:13.403818  337862 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0626 18:27:13.405378  337862 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0626 18:27:13.411249  337862 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0626 18:27:13.411266  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:13.411600  337862 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0626 18:27:13.411622  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:13.694283  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0626 18:27:13.915080  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:13.915409  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:13.995178  337862 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0626 18:27:13.995244  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:14.014672  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:14.205211  337862 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0626 18:27:14.224420  337862 addons.go:228] Setting addon gcp-auth=true in "addons-052687"
	I0626 18:27:14.224479  337862 host.go:66] Checking if "addons-052687" exists ...
	I0626 18:27:14.225613  337862 cli_runner.go:164] Run: docker container inspect addons-052687 --format={{.State.Status}}
	I0626 18:27:14.229835  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.420142026s)
	I0626 18:27:14.229876  337862 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-052687"
	I0626 18:27:14.231917  337862 out.go:177] * Verifying csi-hostpath-driver addon...
	I0626 18:27:14.234258  337862 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0626 18:27:14.246344  337862 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0626 18:27:14.246389  337862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-052687
	I0626 18:27:14.261987  337862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33082 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/addons-052687/id_rsa Username:docker}
	I0626 18:27:14.297982  337862 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0626 18:27:14.298012  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:14.415976  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:14.416344  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:14.802752  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:14.896835  337862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.202489831s)
	I0626 18:27:14.898624  337862 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0626 18:27:14.900284  337862 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0626 18:27:14.901587  337862 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0626 18:27:14.901604  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0626 18:27:14.915455  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:14.915840  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:14.919424  337862 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0626 18:27:14.919444  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0626 18:27:14.935728  337862 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0626 18:27:14.935747  337862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0626 18:27:14.951901  337862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0626 18:27:15.302065  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:15.393494  337862 addons.go:464] Verifying addon gcp-auth=true in "addons-052687"
	I0626 18:27:15.396004  337862 out.go:177] * Verifying gcp-auth addon...
	I0626 18:27:15.398277  337862 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0626 18:27:15.401198  337862 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0626 18:27:15.401215  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:15.411169  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:15.415118  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:15.415355  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:15.803968  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:15.914137  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:15.917141  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:15.917730  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:16.303930  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:16.405590  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:16.415625  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:16.416260  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:16.803287  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:16.905686  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:16.916630  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:16.917194  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:17.304381  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:17.406350  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:17.415116  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:17.496842  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:17.499099  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:17.807364  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:17.905714  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:17.915895  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:17.916231  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:18.304567  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:18.406172  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:18.418054  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:18.418315  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:18.802671  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:18.905504  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:18.918052  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:18.918401  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:19.302902  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:19.405630  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:19.415899  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:19.416450  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:19.802235  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:19.905869  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:19.915103  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:19.916311  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:19.916489  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:20.304055  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:20.404918  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:20.414427  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:20.415572  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:20.802418  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:20.905656  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:20.915202  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:20.915318  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:21.302487  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:21.405011  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:21.415284  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:21.415453  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:21.802529  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:21.905331  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:21.915069  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:21.915759  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:22.302364  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:22.405429  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:22.412057  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:22.415178  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:22.415517  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:22.802077  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:22.905171  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:22.914549  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:22.914937  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:23.302757  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:23.404424  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:23.416660  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:23.416847  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:23.803000  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:23.905151  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:23.914892  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:23.916000  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:24.303265  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:24.405608  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:24.412237  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:24.415253  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:24.415475  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:24.802051  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:24.905296  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:24.914840  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:24.914993  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:25.303498  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:25.405539  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:25.415378  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:25.415686  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:25.802699  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:25.904979  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:25.915248  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:25.915378  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:26.303170  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:26.404986  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:26.415774  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:26.416008  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:26.802632  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:26.905351  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:26.911514  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:26.914897  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:26.915244  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:27.302898  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:27.404932  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:27.415477  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:27.415591  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:27.802901  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:27.904918  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:27.915487  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:27.915785  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:28.303509  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:28.405668  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:28.415456  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:28.415498  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:28.803048  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:28.904755  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:28.912008  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:28.914971  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:28.915172  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:29.303001  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:29.405101  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:29.415115  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:29.415237  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:29.802439  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:29.905750  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:29.915094  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:29.915403  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:30.302723  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:30.404559  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:30.414876  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:30.415139  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:30.802842  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:30.904700  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:30.912250  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:30.914930  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:30.915038  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:31.302346  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:31.405292  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:31.414986  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:31.415129  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:31.802371  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:31.905301  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:31.914898  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:31.917311  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:32.302693  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:32.404690  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:32.415158  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:32.415194  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:32.802745  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:32.905113  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:32.914622  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:32.915018  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:33.303297  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:33.405300  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:33.411991  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:33.415049  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:33.415240  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:33.802954  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:33.904763  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:33.915172  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:33.915198  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:34.303096  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:34.405525  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:34.414844  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:34.414976  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:34.802328  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:34.905512  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:34.914807  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:34.914997  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:35.302246  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:35.406167  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:35.426164  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:35.426413  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:35.426982  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:35.802930  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:35.904989  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:35.915018  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:35.915170  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:36.302240  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:36.405173  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:36.414492  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:36.415205  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:36.802834  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:36.904441  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:36.914628  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:36.915185  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:37.303274  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:37.405065  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:37.414835  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:37.415367  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:37.803380  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:37.905067  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:37.911886  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:37.914805  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:37.915356  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:38.303321  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:38.405597  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:38.414941  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:38.414940  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:38.802318  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:38.905168  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:38.914627  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:38.914712  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:39.302134  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:39.405040  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:39.414545  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:39.414849  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:39.802544  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:39.905483  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:39.912545  337862 node_ready.go:58] node "addons-052687" has status "Ready":"False"
	I0626 18:27:39.917842  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:39.918205  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:40.303032  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:40.404994  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:40.415003  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:40.415225  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:40.802023  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:40.905326  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:40.914692  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:40.914999  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:41.305893  337862 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0626 18:27:41.305921  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:41.405074  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:41.412317  337862 node_ready.go:49] node "addons-052687" has status "Ready":"True"
	I0626 18:27:41.412343  337862 node_ready.go:38] duration metric: took 28.009310545s waiting for node "addons-052687" to be "Ready" ...
	I0626 18:27:41.412354  337862 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 18:27:41.418261  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:41.419138  337862 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0626 18:27:41.419160  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:41.424371  337862 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-6btv6" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:41.804232  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:41.905613  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:41.916160  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:41.916225  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:42.303719  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:42.405819  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:42.415826  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:42.417313  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:42.804729  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:42.904842  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:42.915593  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:42.916209  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:42.935550  337862 pod_ready.go:92] pod "coredns-5d78c9869d-6btv6" in "kube-system" namespace has status "Ready":"True"
	I0626 18:27:42.935571  337862 pod_ready.go:81] duration metric: took 1.511170705s waiting for pod "coredns-5d78c9869d-6btv6" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:42.935594  337862 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-052687" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:42.940795  337862 pod_ready.go:92] pod "etcd-addons-052687" in "kube-system" namespace has status "Ready":"True"
	I0626 18:27:42.940821  337862 pod_ready.go:81] duration metric: took 5.219323ms waiting for pod "etcd-addons-052687" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:42.940837  337862 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-052687" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:42.946015  337862 pod_ready.go:92] pod "kube-apiserver-addons-052687" in "kube-system" namespace has status "Ready":"True"
	I0626 18:27:42.946033  337862 pod_ready.go:81] duration metric: took 5.189111ms waiting for pod "kube-apiserver-addons-052687" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:42.946043  337862 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-052687" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:42.997121  337862 pod_ready.go:92] pod "kube-controller-manager-addons-052687" in "kube-system" namespace has status "Ready":"True"
	I0626 18:27:42.997149  337862 pod_ready.go:81] duration metric: took 51.0994ms waiting for pod "kube-controller-manager-addons-052687" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:42.997166  337862 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-222zw" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:43.013403  337862 pod_ready.go:92] pod "kube-proxy-222zw" in "kube-system" namespace has status "Ready":"True"
	I0626 18:27:43.013425  337862 pod_ready.go:81] duration metric: took 16.251986ms waiting for pod "kube-proxy-222zw" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:43.013439  337862 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-052687" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:43.304278  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:43.405138  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:43.412322  337862 pod_ready.go:92] pod "kube-scheduler-addons-052687" in "kube-system" namespace has status "Ready":"True"
	I0626 18:27:43.412348  337862 pod_ready.go:81] duration metric: took 398.90125ms waiting for pod "kube-scheduler-addons-052687" in "kube-system" namespace to be "Ready" ...
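The pod_ready entries above show minikube repeatedly polling each system pod until its status reports "Ready":"True". Below is a minimal sketch of that style of check with client-go, not minikube's own helper: the kubeconfig path, the helper names (`isPodReady`, `waitForPodReady`), and the polling interval are illustrative assumptions.

```go
// Poll a pod until its PodReady condition is True, mirroring the
// "Ready":"True" checks in the pod_ready log lines above.
// Sketch only: assumes a client-go clientset built from a kubeconfig.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q never became Ready: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond): // interval is illustrative
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPodReady(ctx, cs, "kube-system", "etcd-addons-052687"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```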
	I0626 18:27:43.412361  337862 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace to be "Ready" ...
	I0626 18:27:43.416912  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:43.420384  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:43.803601  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:43.905141  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:43.916007  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:43.916176  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:44.304399  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:44.405438  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:44.416094  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:44.416250  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:44.803256  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:44.904928  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:44.915641  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:44.915697  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:45.303139  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:45.405325  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:45.415601  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:45.416053  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:45.803941  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:45.819435  337862 pod_ready.go:102] pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace has status "Ready":"False"
	I0626 18:27:45.905586  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:45.916493  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:45.916601  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:46.303168  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:46.404642  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:46.415849  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:46.415925  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:46.803726  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:46.905121  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:46.917258  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:46.918690  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:47.303602  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:47.404699  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:47.417439  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:47.417480  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:47.803647  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:47.821226  337862 pod_ready.go:102] pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace has status "Ready":"False"
	I0626 18:27:47.905473  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:47.916478  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:47.916594  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:48.303862  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:48.405711  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:48.426621  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:48.426676  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:48.803574  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:48.904959  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:48.915747  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:48.916760  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:49.304375  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:49.405243  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:49.416816  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:49.416856  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:49.804803  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:49.904553  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:49.915918  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:49.915969  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:50.304036  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:50.334802  337862 pod_ready.go:102] pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace has status "Ready":"False"
	I0626 18:27:50.404814  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:50.415548  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:50.416009  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:50.803974  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:50.904771  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:50.915969  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:50.916116  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:51.303305  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:51.404857  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:51.415184  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:51.415343  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:51.803484  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:51.906008  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:51.916662  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:51.917414  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:52.304501  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:52.405752  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:52.415946  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:52.416158  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:52.805820  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:52.820216  337862 pod_ready.go:102] pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace has status "Ready":"False"
	I0626 18:27:52.904845  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:52.916797  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:52.916894  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:53.305109  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:53.405005  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:53.417083  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:53.417757  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:53.804831  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:53.905289  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:53.916292  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:53.916616  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:54.304980  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:54.405723  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:54.415809  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:54.415914  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:54.804706  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:54.905231  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:54.916211  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:54.916252  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:55.305407  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:55.320295  337862 pod_ready.go:102] pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace has status "Ready":"False"
	I0626 18:27:55.405343  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:55.418417  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:55.418463  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:55.819751  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:55.905338  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:55.916500  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:55.916646  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:56.303789  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:56.404998  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:56.415757  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:56.415842  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:56.857194  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:56.926496  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:56.927397  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:56.927578  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:57.304917  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:57.404798  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:57.416327  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:57.416587  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:57.804552  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:57.820292  337862 pod_ready.go:102] pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace has status "Ready":"False"
	I0626 18:27:57.905383  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:57.915547  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:57.915821  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:58.303461  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:58.405597  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:58.415355  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:58.415660  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:58.803369  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:58.911262  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:58.915725  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:58.917452  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:59.303713  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:59.406187  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:59.416719  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:27:59.417523  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:59.803392  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:27:59.905316  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:27:59.916518  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:27:59.916581  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:00.303591  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:00.319600  337862 pod_ready.go:102] pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace has status "Ready":"False"
	I0626 18:28:00.404757  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:00.415141  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:00.415400  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:00.804709  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:00.905201  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:00.915774  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:00.915929  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:01.303922  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:01.405383  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:01.416400  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:01.416451  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:01.803357  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:01.819816  337862 pod_ready.go:92] pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace has status "Ready":"True"
	I0626 18:28:01.819843  337862 pod_ready.go:81] duration metric: took 18.407472811s waiting for pod "metrics-server-844d8db974-6zgpf" in "kube-system" namespace to be "Ready" ...
	I0626 18:28:01.819868  337862 pod_ready.go:38] duration metric: took 20.407487854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 18:28:01.819891  337862 api_server.go:52] waiting for apiserver process to appear ...
	I0626 18:28:01.819952  337862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 18:28:01.832231  337862 api_server.go:72] duration metric: took 54.022696444s to wait for apiserver process to appear ...
	I0626 18:28:01.832261  337862 api_server.go:88] waiting for apiserver healthz status ...
	I0626 18:28:01.832281  337862 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0626 18:28:01.838310  337862 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0626 18:28:01.839412  337862 api_server.go:141] control plane version: v1.27.3
	I0626 18:28:01.839435  337862 api_server.go:131] duration metric: took 7.16789ms to wait for apiserver health ...
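The api_server.go lines above record a healthz probe against https://192.168.49.2:8443/healthz returning 200 with body "ok". The same probe can be reproduced with client-go's discovery REST client, which reuses the kubeconfig's TLS credentials; this is a sketch under that assumption, not the code behind api_server.go.

```go
// Probe the apiserver /healthz endpoint, as in the api_server.go log lines above.
// Sketch only: assumes a kubeconfig for the same cluster.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Authenticated equivalent of `curl https://<apiserver>:8443/healthz`.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body) // expect "ok" when the control plane is healthy
}
```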
	I0626 18:28:01.839444  337862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 18:28:01.847489  337862 system_pods.go:59] 18 kube-system pods found
	I0626 18:28:01.847538  337862 system_pods.go:61] "coredns-5d78c9869d-6btv6" [a1a9b44b-be13-4be2-b644-f62a14f5f5c3] Running
	I0626 18:28:01.847547  337862 system_pods.go:61] "csi-hostpath-attacher-0" [c1c8e0d0-3196-4bb8-a1b3-6d81895b275e] Running
	I0626 18:28:01.847554  337862 system_pods.go:61] "csi-hostpath-resizer-0" [6a06e283-d91d-4d23-bf81-f59efcecc254] Running
	I0626 18:28:01.847566  337862 system_pods.go:61] "csi-hostpathplugin-znfx5" [b0ec9c5e-f93d-4586-821b-9be6c3f22f45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0626 18:28:01.847582  337862 system_pods.go:61] "etcd-addons-052687" [97204089-f795-4bb1-b553-11dafb9eb821] Running
	I0626 18:28:01.847590  337862 system_pods.go:61] "kindnet-5jww5" [5a6f2883-9dca-43a8-9129-9eb2680d84bd] Running
	I0626 18:28:01.847599  337862 system_pods.go:61] "kube-apiserver-addons-052687" [e6c159ce-e40e-4ea9-b680-32f86abee8b3] Running
	I0626 18:28:01.847605  337862 system_pods.go:61] "kube-controller-manager-addons-052687" [2c5e65f0-4b84-476d-a95b-a829ac30beb3] Running
	I0626 18:28:01.847612  337862 system_pods.go:61] "kube-ingress-dns-minikube" [6dd92797-288f-4583-b8ea-e5ce940c466b] Running
	I0626 18:28:01.847616  337862 system_pods.go:61] "kube-proxy-222zw" [74adc10f-7f95-491e-9316-018346c806d1] Running
	I0626 18:28:01.847623  337862 system_pods.go:61] "kube-scheduler-addons-052687" [743b8463-8a12-4fe1-b9af-1ebc6c173074] Running
	I0626 18:28:01.847628  337862 system_pods.go:61] "metrics-server-844d8db974-6zgpf" [82860bec-db0a-4d75-a5af-543a8abf33c3] Running
	I0626 18:28:01.847636  337862 system_pods.go:61] "registry-2sks6" [26f467fb-cc2f-4224-9db7-9f814feb6f78] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0626 18:28:01.847644  337862 system_pods.go:61] "registry-proxy-g4qzm" [c00bd5d7-17c7-4e9d-ab97-78cf04ed731d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0626 18:28:01.847653  337862 system_pods.go:61] "snapshot-controller-75bbb956b9-qv5fs" [8d68875f-81d4-4bfd-bc43-57583b2bbd73] Running
	I0626 18:28:01.847663  337862 system_pods.go:61] "snapshot-controller-75bbb956b9-zrllj" [4c5222ae-67a9-4bd7-88fb-313a803bac0d] Running
	I0626 18:28:01.847675  337862 system_pods.go:61] "storage-provisioner" [c1b6eb9e-33cf-4d48-bac0-9327b22594d4] Running
	I0626 18:28:01.847683  337862 system_pods.go:61] "tiller-deploy-6847666dc-fxdms" [91bdb879-b774-43fb-a404-ab2bdc6d3ef4] Running
	I0626 18:28:01.847695  337862 system_pods.go:74] duration metric: took 8.244998ms to wait for pod list to return data ...
	I0626 18:28:01.847706  337862 default_sa.go:34] waiting for default service account to be created ...
	I0626 18:28:01.849930  337862 default_sa.go:45] found service account: "default"
	I0626 18:28:01.849953  337862 default_sa.go:55] duration metric: took 2.238312ms for default service account to be created ...
	I0626 18:28:01.849961  337862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 18:28:01.859187  337862 system_pods.go:86] 18 kube-system pods found
	I0626 18:28:01.859214  337862 system_pods.go:89] "coredns-5d78c9869d-6btv6" [a1a9b44b-be13-4be2-b644-f62a14f5f5c3] Running
	I0626 18:28:01.859222  337862 system_pods.go:89] "csi-hostpath-attacher-0" [c1c8e0d0-3196-4bb8-a1b3-6d81895b275e] Running
	I0626 18:28:01.859228  337862 system_pods.go:89] "csi-hostpath-resizer-0" [6a06e283-d91d-4d23-bf81-f59efcecc254] Running
	I0626 18:28:01.859241  337862 system_pods.go:89] "csi-hostpathplugin-znfx5" [b0ec9c5e-f93d-4586-821b-9be6c3f22f45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0626 18:28:01.859250  337862 system_pods.go:89] "etcd-addons-052687" [97204089-f795-4bb1-b553-11dafb9eb821] Running
	I0626 18:28:01.859258  337862 system_pods.go:89] "kindnet-5jww5" [5a6f2883-9dca-43a8-9129-9eb2680d84bd] Running
	I0626 18:28:01.859265  337862 system_pods.go:89] "kube-apiserver-addons-052687" [e6c159ce-e40e-4ea9-b680-32f86abee8b3] Running
	I0626 18:28:01.859276  337862 system_pods.go:89] "kube-controller-manager-addons-052687" [2c5e65f0-4b84-476d-a95b-a829ac30beb3] Running
	I0626 18:28:01.859285  337862 system_pods.go:89] "kube-ingress-dns-minikube" [6dd92797-288f-4583-b8ea-e5ce940c466b] Running
	I0626 18:28:01.859294  337862 system_pods.go:89] "kube-proxy-222zw" [74adc10f-7f95-491e-9316-018346c806d1] Running
	I0626 18:28:01.859303  337862 system_pods.go:89] "kube-scheduler-addons-052687" [743b8463-8a12-4fe1-b9af-1ebc6c173074] Running
	I0626 18:28:01.859312  337862 system_pods.go:89] "metrics-server-844d8db974-6zgpf" [82860bec-db0a-4d75-a5af-543a8abf33c3] Running
	I0626 18:28:01.859324  337862 system_pods.go:89] "registry-2sks6" [26f467fb-cc2f-4224-9db7-9f814feb6f78] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0626 18:28:01.859336  337862 system_pods.go:89] "registry-proxy-g4qzm" [c00bd5d7-17c7-4e9d-ab97-78cf04ed731d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0626 18:28:01.859346  337862 system_pods.go:89] "snapshot-controller-75bbb956b9-qv5fs" [8d68875f-81d4-4bfd-bc43-57583b2bbd73] Running
	I0626 18:28:01.859354  337862 system_pods.go:89] "snapshot-controller-75bbb956b9-zrllj" [4c5222ae-67a9-4bd7-88fb-313a803bac0d] Running
	I0626 18:28:01.859362  337862 system_pods.go:89] "storage-provisioner" [c1b6eb9e-33cf-4d48-bac0-9327b22594d4] Running
	I0626 18:28:01.859371  337862 system_pods.go:89] "tiller-deploy-6847666dc-fxdms" [91bdb879-b774-43fb-a404-ab2bdc6d3ef4] Running
	I0626 18:28:01.859382  337862 system_pods.go:126] duration metric: took 9.415801ms to wait for k8s-apps to be running ...
	I0626 18:28:01.859393  337862 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 18:28:01.859448  337862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 18:28:01.871316  337862 system_svc.go:56] duration metric: took 11.912658ms WaitForService to wait for kubelet.
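The system_svc step above runs `sudo systemctl is-active --quiet service kubelet` over the cluster's SSH runner. Stripped of the SSH indirection, the check reduces to inspecting that command's exit status, roughly as in this local sketch:

```go
// Check whether the kubelet unit is active, as the WaitForService step above
// does remotely with `systemctl is-active --quiet`. Sketch only: run locally.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```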
	I0626 18:28:01.871345  337862 kubeadm.go:581] duration metric: took 54.06182125s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 18:28:01.871374  337862 node_conditions.go:102] verifying NodePressure condition ...
	I0626 18:28:01.874362  337862 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0626 18:28:01.874384  337862 node_conditions.go:123] node cpu capacity is 8
	I0626 18:28:01.874396  337862 node_conditions.go:105] duration metric: took 3.016884ms to run NodePressure ...
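The node_conditions lines report the node's ephemeral-storage capacity (304681132Ki) and CPU count (8); both come straight from the Node object's status and can be read the same way. A sketch assuming the same kubeconfig and node name:

```go
// Read a node's ephemeral-storage and CPU capacity, matching the
// node_conditions output above. Sketch only.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-052687", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage: %s, cpu: %s\n", storage.String(), cpu.String())
}
```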
	I0626 18:28:01.874406  337862 start.go:228] waiting for startup goroutines ...
	I0626 18:28:01.905544  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:01.916164  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:01.916168  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:02.320790  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:02.433199  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:02.433793  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:02.434010  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:02.804887  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:02.905045  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:02.915485  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:02.915566  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:03.304657  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:03.404884  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:03.418119  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:03.418786  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:03.804800  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:03.905672  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:03.916668  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:03.917182  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:04.303647  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:04.405365  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:04.416169  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:04.416284  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:04.803741  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:04.905376  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:04.916651  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:04.917368  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:05.304320  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:05.405491  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:05.416317  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:05.416458  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:05.803827  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:05.905421  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:05.915767  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:05.915864  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:06.304039  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:06.405501  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:06.416016  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:06.416173  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:06.803798  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:06.905137  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:06.915583  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:06.915611  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:07.304096  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:07.404489  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:07.416107  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:07.416239  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:07.803378  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:07.904846  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:07.915485  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:07.923072  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:08.304778  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:08.405769  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:08.415950  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:08.416390  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:08.803742  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:08.905120  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:08.916284  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:08.916656  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:09.303270  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:09.405411  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:09.415991  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:09.416375  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0626 18:28:09.803025  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:09.905450  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:09.917974  337862 kapi.go:107] duration metric: took 56.514153634s to wait for kubernetes.io/minikube-addons=registry ...
	I0626 18:28:09.917974  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:10.304066  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:10.404006  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:10.415379  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:10.803089  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:10.904536  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:10.915772  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:11.303271  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:11.404660  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:11.417695  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:11.804225  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:11.905535  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:11.916184  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:12.303590  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:12.405596  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:12.416472  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:12.804239  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:12.904748  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:12.915781  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:13.303568  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:13.404517  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:13.416259  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:13.803903  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:13.905359  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:13.916249  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:14.303620  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:14.405152  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:14.415396  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:14.803638  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:14.905013  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:14.915209  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:15.303691  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:15.405002  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:15.414972  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:15.804437  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:15.904589  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:15.915848  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:16.304252  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:16.405737  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:16.416111  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:16.803620  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:16.905843  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:16.916963  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:17.303198  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:17.404707  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0626 18:28:17.416052  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:17.805319  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:18.002446  337862 kapi.go:107] duration metric: took 1m2.604162439s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0626 18:28:18.003888  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:18.080145  337862 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-052687 cluster.
	I0626 18:28:18.101923  337862 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0626 18:28:18.144833  337862 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0626 18:28:18.303009  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:18.416689  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:18.803619  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:18.915924  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:19.317828  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:19.417158  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:19.804925  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:19.917708  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:20.306550  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:20.416845  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:20.804799  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:20.917464  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:21.305021  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:21.416110  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:21.804254  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:21.916604  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:22.303305  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:22.416553  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:22.803883  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:22.916488  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:23.304124  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:23.416581  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:23.803432  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:23.916933  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:24.306282  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:24.417596  337862 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0626 18:28:24.804626  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:24.916764  337862 kapi.go:107] duration metric: took 1m11.511382554s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0626 18:28:25.304510  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:25.805909  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:26.302866  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:26.803405  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:27.304107  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:27.803884  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:28.304182  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:28.803590  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:29.303539  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:29.803993  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:30.302873  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:30.803768  337862 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0626 18:28:31.303704  337862 kapi.go:107] duration metric: took 1m17.069442784s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0626 18:28:31.305710  337862 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, helm-tiller, default-storageclass, inspektor-gadget, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0626 18:28:31.307348  337862 addons.go:499] enable addons completed in 1m24.15982713s: enabled=[cloud-spanner ingress-dns storage-provisioner helm-tiller default-storageclass inspektor-gadget metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0626 18:28:31.307383  337862 start.go:233] waiting for cluster config update ...
	I0626 18:28:31.307403  337862 start.go:242] writing updated cluster config ...
	I0626 18:28:31.307667  337862 ssh_runner.go:195] Run: rm -f paused
	I0626 18:28:31.355368  337862 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 18:28:31.357222  337862 out.go:177] * Done! kubectl is now configured to use "addons-052687" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Jun 26 18:31:15 addons-052687 crio[947]: time="2023-06-26 18:31:15.299510649Z" level=info msg="Removing container: a22aa581383938caaef11380d8511ed0d32c89f00a00cf70a30dff9a9957630f" id=996c6214-1baf-4d07-ba29-9240ebcfcd9e name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 26 18:31:15 addons-052687 crio[947]: time="2023-06-26 18:31:15.314411955Z" level=info msg="Removed container a22aa581383938caaef11380d8511ed0d32c89f00a00cf70a30dff9a9957630f: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=996c6214-1baf-4d07-ba29-9240ebcfcd9e name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 26 18:31:15 addons-052687 crio[947]: time="2023-06-26 18:31:15.653786026Z" level=info msg="Stopping container: 7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471 (timeout: 1s)" id=8ff4d384-dec7-485c-8c46-1d65e84efb65 name=/runtime.v1.RuntimeService/StopContainer
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.656521177Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea" id=22873d11-9eda-473a-a798-40860e838836 name=/runtime.v1.ImageService/PullImage
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.657455501Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=c208e90b-ae52-4501-9106-fc90f9ab611e name=/runtime.v1.ImageService/ImageStatus
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.658308664Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c208e90b-ae52-4501-9106-fc90f9ab611e name=/runtime.v1.ImageService/ImageStatus
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.659203686Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-wvhm8/hello-world-app" id=10aaaacb-2909-412c-a0a6-5b0aa904d4c9 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.659316540Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.699689924Z" level=warning msg="Stopping container 7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471 with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=8ff4d384-dec7-485c-8c46-1d65e84efb65 name=/runtime.v1.RuntimeService/StopContainer
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.735985166Z" level=info msg="Created container 40e0ae5fe30491d230d88da6a980ddf5b40fb2591b4e604df1bae8974f8ee646: default/hello-world-app-65bdb79f98-wvhm8/hello-world-app" id=10aaaacb-2909-412c-a0a6-5b0aa904d4c9 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.736589235Z" level=info msg="Starting container: 40e0ae5fe30491d230d88da6a980ddf5b40fb2591b4e604df1bae8974f8ee646" id=fca80498-93e3-4532-b70d-c77773b86dfe name=/runtime.v1.RuntimeService/StartContainer
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.745433975Z" level=info msg="Started container" PID=9565 containerID=40e0ae5fe30491d230d88da6a980ddf5b40fb2591b4e604df1bae8974f8ee646 description=default/hello-world-app-65bdb79f98-wvhm8/hello-world-app id=fca80498-93e3-4532-b70d-c77773b86dfe name=/runtime.v1.RuntimeService/StartContainer sandboxID=da31a93de507968ed9380e7d14364c0bb08e7409884580590095e0326ae516a1
	Jun 26 18:31:16 addons-052687 conmon[5811]: conmon 7806df4f37cbf2ddf8a1 <ninfo>: container 5823 exited with status 137
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.847827584Z" level=info msg="Stopped container 7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471: ingress-nginx/ingress-nginx-controller-7b4698b8c7-mxmfm/controller" id=8ff4d384-dec7-485c-8c46-1d65e84efb65 name=/runtime.v1.RuntimeService/StopContainer
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.848363142Z" level=info msg="Stopping pod sandbox: b5efba96a01c1b8c248c160f163534e473d2aa9c91fb6974aa2a3878130eecfe" id=423e34e5-ad8f-4218-9aa3-9b4bd36bc992 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.851290817Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-7PXRTMX5PD2HQMWR - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-5Q2TS63BAMQBBUQJ - [0:0]\n-X KUBE-HP-5Q2TS63BAMQBBUQJ\n-X KUBE-HP-7PXRTMX5PD2HQMWR\nCOMMIT\n"
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.852528431Z" level=info msg="Closing host port tcp:80"
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.852567251Z" level=info msg="Closing host port tcp:443"
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.853972578Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.853993779Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.854140515Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7b4698b8c7-mxmfm Namespace:ingress-nginx ID:b5efba96a01c1b8c248c160f163534e473d2aa9c91fb6974aa2a3878130eecfe UID:4fb955c3-4911-4d9c-b79c-d2fa0d56d850 NetNS:/var/run/netns/8b17c634-4f86-4444-83ea-920ac00c6ada Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.854273125Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7b4698b8c7-mxmfm from CNI network \"kindnet\" (type=ptp)"
	Jun 26 18:31:16 addons-052687 crio[947]: time="2023-06-26 18:31:16.886276509Z" level=info msg="Stopped pod sandbox: b5efba96a01c1b8c248c160f163534e473d2aa9c91fb6974aa2a3878130eecfe" id=423e34e5-ad8f-4218-9aa3-9b4bd36bc992 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jun 26 18:31:17 addons-052687 crio[947]: time="2023-06-26 18:31:17.305949216Z" level=info msg="Removing container: 7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471" id=1b443343-66c7-4d8e-9517-b8b17ac761ad name=/runtime.v1.RuntimeService/RemoveContainer
	Jun 26 18:31:17 addons-052687 crio[947]: time="2023-06-26 18:31:17.323506272Z" level=info msg="Removed container 7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471: ingress-nginx/ingress-nginx-controller-7b4698b8c7-mxmfm/controller" id=1b443343-66c7-4d8e-9517-b8b17ac761ad name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	40e0ae5fe3049       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      7 seconds ago       Running             hello-world-app           0                   da31a93de5079       hello-world-app-65bdb79f98-wvhm8
	2ed27132aab7d       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   0f89efe293dae       nginx
	d2c59df98e6ba       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        2 minutes ago       Running             headlamp                  0                   c381606710cd0       headlamp-66f6498c69-pbtjd
	42f402b80e33b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   4f3fb44eb3474       gcp-auth-58478865f7-t8lt5
	c295c2d42b587       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   9d7cfa119516f       ingress-nginx-admission-patch-jphgq
	1eccd2307b8dd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   d1d635c320261       ingress-nginx-admission-create-vnwbc
	97624b14376be       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   3875425add50c       storage-provisioner
	558a55720669b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   fcc14361bd42b       coredns-5d78c9869d-6btv6
	d9e2b59e89149       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                                             4 minutes ago       Running             kube-proxy                0                   ff7cfd17b5ce2       kube-proxy-222zw
	1e9b3c3f54a13       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             4 minutes ago       Running             kindnet-cni               0                   3414dd59ad685       kindnet-5jww5
	316e14e32ba96       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                                             4 minutes ago       Running             kube-scheduler            0                   1a40a5c448258       kube-scheduler-addons-052687
	84b8d6c00211f       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                                             4 minutes ago       Running             kube-controller-manager   0                   f69da00f143da       kube-controller-manager-addons-052687
	fe0e12094f1f3       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   4d535679dca3d       etcd-addons-052687
	fda0236f79615       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                                             4 minutes ago       Running             kube-apiserver            0                   af12d50ef6f06       kube-apiserver-addons-052687
	
	* 
	* ==> coredns [558a55720669b25c153ffa48c1fd5962cf1e9b2ad6ac769d7df916c5dc87c049] <==
	* [INFO] 10.244.0.12:46990 - 8213 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083407s
	[INFO] 10.244.0.12:43116 - 31358 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003309474s
	[INFO] 10.244.0.12:43116 - 33405 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.003408587s
	[INFO] 10.244.0.12:38913 - 45329 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003013667s
	[INFO] 10.244.0.12:38913 - 50451 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004081813s
	[INFO] 10.244.0.12:59299 - 26380 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003272289s
	[INFO] 10.244.0.12:59299 - 65298 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003333578s
	[INFO] 10.244.0.12:35163 - 30196 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043643s
	[INFO] 10.244.0.12:35163 - 8690 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080864s
	[INFO] 10.244.0.17:33684 - 21050 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192532s
	[INFO] 10.244.0.17:55126 - 48585 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000289276s
	[INFO] 10.244.0.17:37887 - 24204 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011924s
	[INFO] 10.244.0.17:44918 - 57577 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016789s
	[INFO] 10.244.0.17:39323 - 44452 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083714s
	[INFO] 10.244.0.17:51839 - 47137 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119265s
	[INFO] 10.244.0.17:44219 - 63903 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.004990713s
	[INFO] 10.244.0.17:58390 - 1918 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00526574s
	[INFO] 10.244.0.17:53489 - 45725 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005360269s
	[INFO] 10.244.0.17:39550 - 47890 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00542689s
	[INFO] 10.244.0.17:58456 - 19484 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003728965s
	[INFO] 10.244.0.17:51075 - 984 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.003977552s
	[INFO] 10.244.0.17:57883 - 65389 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000649595s
	[INFO] 10.244.0.17:40623 - 64160 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000713655s
	[INFO] 10.244.0.21:44836 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238109s
	[INFO] 10.244.0.21:56984 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160174s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-052687
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-052687
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=addons-052687
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T18_26_54_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-052687
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 18:26:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-052687
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 18:31:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 18:29:27 +0000   Mon, 26 Jun 2023 18:26:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 18:29:27 +0000   Mon, 26 Jun 2023 18:26:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 18:29:27 +0000   Mon, 26 Jun 2023 18:26:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 18:29:27 +0000   Mon, 26 Jun 2023 18:27:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-052687
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 49cc31f1054148838b4c9c0fc94d9266
	  System UUID:                029c3010-f7e8-4d3d-b825-fc779b7eb66c
	  Boot ID:                    4f86402f-f9e2-4c4c-a5d0-b2ea258e243c
	  Kernel Version:             5.15.0-1036-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-wvhm8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  gcp-auth                    gcp-auth-58478865f7-t8lt5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  headlamp                    headlamp-66f6498c69-pbtjd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-5d78c9869d-6btv6                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m16s
	  kube-system                 etcd-addons-052687                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m29s
	  kube-system                 kindnet-5jww5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m16s
	  kube-system                 kube-apiserver-addons-052687             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-addons-052687    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-proxy-222zw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-addons-052687             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m12s  kube-proxy       
	  Normal  Starting                 4m29s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m29s  kubelet          Node addons-052687 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s  kubelet          Node addons-052687 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s  kubelet          Node addons-052687 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m17s  node-controller  Node addons-052687 event: Registered Node addons-052687 in Controller
	  Normal  NodeReady                3m42s  kubelet          Node addons-052687 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1a 82 51 3a 55 fd 08 06
	[Jun26 18:23] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 65 74 7a 3c 6b 08 06
	[  +8.555883] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c2 3b 28 08 b9 1e 08 06
	[  +0.000368] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 22 7e 14 80 33 5f 08 06
	[ +41.874884] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 2c e5 de 8c 82 08 06
	[  +0.000335] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 6e 6c 45 a0 b9 c4 08 06
	[Jun26 18:29] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 86 05 7c b9 a6 20 2a d2 a6 b5 50 64 08 00
	[  +1.023740] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 05 7c b9 a6 20 2a d2 a6 b5 50 64 08 00
	[  +2.015800] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 86 05 7c b9 a6 20 2a d2 a6 b5 50 64 08 00
	[  +4.095582] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 86 05 7c b9 a6 20 2a d2 a6 b5 50 64 08 00
	[  +8.191204] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 05 7c b9 a6 20 2a d2 a6 b5 50 64 08 00
	[ +16.126380] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 86 05 7c b9 a6 20 2a d2 a6 b5 50 64 08 00
	[Jun26 18:30] IPv4: martian source 10.244.0.18 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 86 05 7c b9 a6 20 2a d2 a6 b5 50 64 08 00
	
	* 
	* ==> etcd [fe0e12094f1f341720f3dfd70bee6ee3eb23cca46cd5a886c71d51b7145623cd] <==
	* {"level":"warn","ts":"2023-06-26T18:27:10.698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.074241ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T18:27:10.698Z","caller":"traceutil/trace.go:171","msg":"trace[1203343296] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:383; }","duration":"102.109535ms","start":"2023-06-26T18:27:10.596Z","end":"2023-06-26T18:27:10.698Z","steps":["trace[1203343296] 'agreement among raft nodes before linearized reading'  (duration: 102.028543ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T18:27:11.007Z","caller":"traceutil/trace.go:171","msg":"trace[145357891] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"113.260085ms","start":"2023-06-26T18:27:10.894Z","end":"2023-06-26T18:27:11.007Z","steps":["trace[145357891] 'process raft request'  (duration: 105.938875ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T18:27:11.008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.751771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T18:27:11.093Z","caller":"traceutil/trace.go:171","msg":"trace[1353065232] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:392; }","duration":"199.240362ms","start":"2023-06-26T18:27:10.893Z","end":"2023-06-26T18:27:11.092Z","steps":["trace[1353065232] 'agreement among raft nodes before linearized reading'  (duration: 114.698523ms)"],"step_count":1}
	{"level":"warn","ts":"2023-06-26T18:27:11.008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.591561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-06-26T18:27:11.008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.217167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-06-26T18:27:11.008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.73032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-06-26T18:27:11.094Z","caller":"traceutil/trace.go:171","msg":"trace[1287136043] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:392; }","duration":"200.320977ms","start":"2023-06-26T18:27:10.893Z","end":"2023-06-26T18:27:11.094Z","steps":["trace[1287136043] 'agreement among raft nodes before linearized reading'  (duration: 114.712823ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T18:27:11.094Z","caller":"traceutil/trace.go:171","msg":"trace[1109785237] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:392; }","duration":"198.626044ms","start":"2023-06-26T18:27:10.895Z","end":"2023-06-26T18:27:11.094Z","steps":["trace[1109785237] 'agreement among raft nodes before linearized reading'  (duration: 112.564784ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T18:27:11.095Z","caller":"traceutil/trace.go:171","msg":"trace[182972439] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:392; }","duration":"199.678663ms","start":"2023-06-26T18:27:10.895Z","end":"2023-06-26T18:27:11.095Z","steps":["trace[182972439] 'agreement among raft nodes before linearized reading'  (duration: 113.188995ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T18:27:56.852Z","caller":"traceutil/trace.go:171","msg":"trace[1529241684] transaction","detail":"{read_only:false; response_revision:901; number_of_response:1; }","duration":"141.2878ms","start":"2023-06-26T18:27:56.711Z","end":"2023-06-26T18:27:56.852Z","steps":["trace[1529241684] 'process raft request'  (duration: 141.063777ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T18:28:18.000Z","caller":"traceutil/trace.go:171","msg":"trace[1063407019] transaction","detail":"{read_only:false; response_revision:1014; number_of_response:1; }","duration":"171.671494ms","start":"2023-06-26T18:28:17.828Z","end":"2023-06-26T18:28:18.000Z","steps":["trace[1063407019] 'process raft request'  (duration: 171.582439ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T18:28:18.000Z","caller":"traceutil/trace.go:171","msg":"trace[2029624263] transaction","detail":"{read_only:false; response_revision:1015; number_of_response:1; }","duration":"171.772521ms","start":"2023-06-26T18:28:17.828Z","end":"2023-06-26T18:28:18.000Z","steps":["trace[2029624263] 'process raft request'  (duration: 171.59937ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T18:28:18.000Z","caller":"traceutil/trace.go:171","msg":"trace[59924504] transaction","detail":"{read_only:false; response_revision:1013; number_of_response:1; }","duration":"172.458132ms","start":"2023-06-26T18:28:17.827Z","end":"2023-06-26T18:28:18.000Z","steps":["trace[59924504] 'process raft request'  (duration: 109.301868ms)","trace[59924504] 'compare'  (duration: 62.967673ms)"],"step_count":2}
	{"level":"info","ts":"2023-06-26T18:28:56.240Z","caller":"traceutil/trace.go:171","msg":"trace[1647422209] linearizableReadLoop","detail":"{readStateIndex:1322; appliedIndex:1321; }","duration":"109.206961ms","start":"2023-06-26T18:28:56.131Z","end":"2023-06-26T18:28:56.240Z","steps":["trace[1647422209] 'read index received'  (duration: 53.445773ms)","trace[1647422209] 'applied index is now lower than readState.Index'  (duration: 55.760571ms)"],"step_count":2}
	{"level":"info","ts":"2023-06-26T18:28:56.240Z","caller":"traceutil/trace.go:171","msg":"trace[871313527] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1273; }","duration":"109.699434ms","start":"2023-06-26T18:28:56.131Z","end":"2023-06-26T18:28:56.240Z","steps":["trace[871313527] 'process raft request'  (duration: 53.881262ms)","trace[871313527] 'compare'  (duration: 55.644155ms)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T18:28:56.240Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.392441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/gadget-qfbgl\" ","response":"range_response_count:1 size:7518"}
	{"level":"warn","ts":"2023-06-26T18:28:56.240Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.386163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/gadget/gadget-86c774fd66\" ","response":"range_response_count:1 size:6911"}
	{"level":"info","ts":"2023-06-26T18:28:56.240Z","caller":"traceutil/trace.go:171","msg":"trace[4292469] range","detail":"{range_begin:/registry/pods/gadget/gadget-qfbgl; range_end:; response_count:1; response_revision:1273; }","duration":"109.445715ms","start":"2023-06-26T18:28:56.131Z","end":"2023-06-26T18:28:56.240Z","steps":["trace[4292469] 'agreement among raft nodes before linearized reading'  (duration: 109.302799ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T18:28:56.240Z","caller":"traceutil/trace.go:171","msg":"trace[113338398] range","detail":"{range_begin:/registry/controllerrevisions/gadget/gadget-86c774fd66; range_end:; response_count:1; response_revision:1273; }","duration":"109.414318ms","start":"2023-06-26T18:28:56.131Z","end":"2023-06-26T18:28:56.240Z","steps":["trace[113338398] 'agreement among raft nodes before linearized reading'  (duration: 109.311808ms)"],"step_count":1}
	{"level":"info","ts":"2023-06-26T18:28:56.421Z","caller":"traceutil/trace.go:171","msg":"trace[976709554] linearizableReadLoop","detail":"{readStateIndex:1325; appliedIndex:1322; }","duration":"163.817738ms","start":"2023-06-26T18:28:56.257Z","end":"2023-06-26T18:28:56.421Z","steps":["trace[976709554] 'read index received'  (duration: 33.402641ms)","trace[976709554] 'applied index is now lower than readState.Index'  (duration: 130.413917ms)"],"step_count":2}
	{"level":"info","ts":"2023-06-26T18:28:56.421Z","caller":"traceutil/trace.go:171","msg":"trace[125866315] transaction","detail":"{read_only:false; response_revision:1276; number_of_response:1; }","duration":"176.883477ms","start":"2023-06-26T18:28:56.244Z","end":"2023-06-26T18:28:56.421Z","steps":["trace[125866315] 'process raft request'  (duration: 116.541875ms)","trace[125866315] 'compare'  (duration: 60.146722ms)"],"step_count":2}
	{"level":"warn","ts":"2023-06-26T18:28:56.421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.074104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-06-26T18:28:56.421Z","caller":"traceutil/trace.go:171","msg":"trace[1209871754] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1276; }","duration":"164.165177ms","start":"2023-06-26T18:28:56.257Z","end":"2023-06-26T18:28:56.421Z","steps":["trace[1209871754] 'agreement among raft nodes before linearized reading'  (duration: 164.02108ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [42f402b80e33b272573d93f1778eb332a4ccb426535816cd5145ca2f39070799] <==
	* 2023/06/26 18:28:17 GCP Auth Webhook started!
	2023/06/26 18:28:36 Ready to marshal response ...
	2023/06/26 18:28:36 Ready to write response ...
	2023/06/26 18:28:37 Ready to marshal response ...
	2023/06/26 18:28:37 Ready to write response ...
	2023/06/26 18:28:37 Ready to marshal response ...
	2023/06/26 18:28:37 Ready to write response ...
	2023/06/26 18:28:37 Ready to marshal response ...
	2023/06/26 18:28:37 Ready to write response ...
	2023/06/26 18:28:41 Ready to marshal response ...
	2023/06/26 18:28:41 Ready to write response ...
	2023/06/26 18:28:43 Ready to marshal response ...
	2023/06/26 18:28:43 Ready to write response ...
	2023/06/26 18:29:11 Ready to marshal response ...
	2023/06/26 18:29:11 Ready to write response ...
	2023/06/26 18:29:43 Ready to marshal response ...
	2023/06/26 18:29:43 Ready to write response ...
	2023/06/26 18:31:13 Ready to marshal response ...
	2023/06/26 18:31:13 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:31:23 up  1:13,  0 users,  load average: 1.00, 1.54, 1.78
	Linux addons-052687 5.15.0-1036-gcp #44~20.04.1-Ubuntu SMP Fri Jun 9 10:48:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [1e9b3c3f54a1358579cd1eb8221970ff4a42d83a9066b6473278ca82cd6db41b] <==
	* I0626 18:29:21.058255       1 main.go:227] handling current node
	I0626 18:29:31.062970       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:29:31.062995       1 main.go:227] handling current node
	I0626 18:29:41.072897       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:29:41.072922       1 main.go:227] handling current node
	I0626 18:29:51.076756       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:29:51.076779       1 main.go:227] handling current node
	I0626 18:30:01.088377       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:30:01.088410       1 main.go:227] handling current node
	I0626 18:30:11.092524       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:30:11.092549       1 main.go:227] handling current node
	I0626 18:30:21.096098       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:30:21.096123       1 main.go:227] handling current node
	I0626 18:30:31.100414       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:30:31.100437       1 main.go:227] handling current node
	I0626 18:30:41.104364       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:30:41.104387       1 main.go:227] handling current node
	I0626 18:30:51.107798       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:30:51.107818       1 main.go:227] handling current node
	I0626 18:31:01.120189       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:31:01.120215       1 main.go:227] handling current node
	I0626 18:31:11.124351       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:31:11.124378       1 main.go:227] handling current node
	I0626 18:31:21.136238       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:31:21.136266       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [fda0236f79615647c9088aa0fdfe2753dea95b643c54acde95f97cba8339ea44] <==
	* I0626 18:29:02.784299       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0626 18:29:26.106137       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0626 18:29:59.417087       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 18:29:59.417150       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 18:29:59.422457       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 18:29:59.422513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 18:29:59.434325       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 18:29:59.435135       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 18:29:59.444793       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 18:29:59.444853       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 18:29:59.449567       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 18:29:59.450238       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 18:29:59.458719       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 18:29:59.458756       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0626 18:29:59.493939       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0626 18:29:59.494045       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0626 18:30:00.434722       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0626 18:30:00.458787       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0626 18:30:00.506168       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0626 18:30:02.768707       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0626 18:30:02.768729       1 handler_proxy.go:100] no RequestInfo found in the context
	E0626 18:30:02.768763       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0626 18:30:02.768771       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0626 18:31:14.154111       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.104.37.248]
	
	* 
	* ==> kube-controller-manager [84b8d6c00211f84ce543b16f8a93c0f9b687abaaf628bb08556c121be1bbf2b9] <==
	* E0626 18:30:13.693232       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 18:30:16.315426       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:30:16.315460       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 18:30:16.689809       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:30:16.689853       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 18:30:18.078419       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:30:18.078451       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 18:30:31.535685       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:30:31.535721       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 18:30:34.070954       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:30:34.070985       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 18:30:39.150774       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:30:39.150808       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 18:30:53.831761       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:30:53.831797       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 18:31:09.721283       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:31:09.721323       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0626 18:31:13.985859       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0626 18:31:13.999791       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-wvhm8"
	I0626 18:31:15.644792       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0626 18:31:15.648900       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0626 18:31:16.315674       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:31:16.315705       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0626 18:31:16.802266       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0626 18:31:16.802304       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [d9e2b59e89149cca578a73645d98af2f7bb6b6c22853708766a62fae1087e3a6] <==
	* I0626 18:27:11.101572       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0626 18:27:11.103438       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0626 18:27:11.103576       1 server_others.go:554] "Using iptables proxy"
	I0626 18:27:11.607990       1 server_others.go:192] "Using iptables Proxier"
	I0626 18:27:11.608106       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0626 18:27:11.608148       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0626 18:27:11.608188       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0626 18:27:11.608251       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 18:27:11.609036       1 server.go:658] "Version info" version="v1.27.3"
	I0626 18:27:11.609400       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 18:27:11.695297       1 config.go:315] "Starting node config controller"
	I0626 18:27:11.695457       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 18:27:11.695905       1 config.go:188] "Starting service config controller"
	I0626 18:27:11.695930       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 18:27:11.695988       1 config.go:97] "Starting endpoint slice config controller"
	I0626 18:27:11.695999       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 18:27:11.802572       1 shared_informer.go:318] Caches are synced for node config
	I0626 18:27:11.802628       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 18:27:11.802657       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [316e14e32ba96ecf0b83c2632b11e9813aa33d22390a1b698ec1342139158b6c] <==
	* W0626 18:26:51.508434       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 18:26:51.508716       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0626 18:26:51.508837       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0626 18:26:51.508841       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 18:26:51.508857       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 18:26:51.508892       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 18:26:51.508903       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 18:26:51.508921       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0626 18:26:51.509021       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 18:26:51.509037       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0626 18:26:51.509496       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 18:26:51.509526       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 18:26:52.487111       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 18:26:52.487142       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0626 18:26:52.521002       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 18:26:52.521037       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 18:26:52.522029       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 18:26:52.522055       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0626 18:26:52.572400       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0626 18:26:52.572433       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0626 18:26:52.617885       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 18:26:52.617925       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0626 18:26:52.734873       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 18:26:52.734910       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0626 18:26:54.699752       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 26 18:31:15 addons-052687 kubelet[1555]: I0626 18:31:15.315867    1555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:a22aa581383938caaef11380d8511ed0d32c89f00a00cf70a30dff9a9957630f} err="failed to get container status \"a22aa581383938caaef11380d8511ed0d32c89f00a00cf70a30dff9a9957630f\": rpc error: code = NotFound desc = could not find container \"a22aa581383938caaef11380d8511ed0d32c89f00a00cf70a30dff9a9957630f\": container with ID starting with a22aa581383938caaef11380d8511ed0d32c89f00a00cf70a30dff9a9957630f not found: ID does not exist"
	Jun 26 18:31:15 addons-052687 kubelet[1555]: E0626 18:31:15.694624    1555 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7b4698b8c7-mxmfm.176c490bdcb15033", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7b4698b8c7-mxmfm", UID:"4fb955c3-4911-4d9c-b79c-d2fa0d56d850", APIVersion:"v1", ResourceVersion:"731", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Componen
t:"kubelet", Host:"addons-052687"}, FirstTimestamp:time.Date(2023, time.June, 26, 18, 31, 15, 653169203, time.Local), LastTimestamp:time.Date(2023, time.June, 26, 18, 31, 15, 653169203, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7b4698b8c7-mxmfm.176c490bdcb15033" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 26 18:31:16 addons-052687 kubelet[1555]: I0626 18:31:16.196346    1555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=02a1e097-f596-4f3a-aeda-104ac84e1837 path="/var/lib/kubelet/pods/02a1e097-f596-4f3a-aeda-104ac84e1837/volumes"
	Jun 26 18:31:16 addons-052687 kubelet[1555]: I0626 18:31:16.196846    1555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=21eed3fd-659f-4bfc-b8cc-5e5c8b21053e path="/var/lib/kubelet/pods/21eed3fd-659f-4bfc-b8cc-5e5c8b21053e/volumes"
	Jun 26 18:31:16 addons-052687 kubelet[1555]: I0626 18:31:16.197280    1555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=6dd92797-288f-4583-b8ea-e5ce940c466b path="/var/lib/kubelet/pods/6dd92797-288f-4583-b8ea-e5ce940c466b/volumes"
	Jun 26 18:31:16 addons-052687 kubelet[1555]: W0626 18:31:16.580624    1555 container.go:586] Failed to update stats for container "/crio-0d841c85bc8de5d895e873035f15b2e55db2da02b1768ff694ffee084e794e1c": unable to determine device info for dir: /var/lib/containers/storage/overlay/4103d522bcda626fcaa0bc4e049ba1e9cb77a6da2c0771d6d021f07a5c02db02/diff: stat failed on /var/lib/containers/storage/overlay/4103d522bcda626fcaa0bc4e049ba1e9cb77a6da2c0771d6d021f07a5c02db02/diff with error: no such file or directory, continuing to push stats
	Jun 26 18:31:16 addons-052687 kubelet[1555]: W0626 18:31:16.607978    1555 container.go:586] Failed to update stats for container "/crio-b51abaaf183b94cb03f05a3c1708b6b52db833b22dd3606bd99d429ca45ed5c3": unable to determine device info for dir: /var/lib/containers/storage/overlay/c150bdfb6d7935cc1b4ded5b924f00da5a64d5740697aeab31aa30a46a19f6fe/diff: stat failed on /var/lib/containers/storage/overlay/c150bdfb6d7935cc1b4ded5b924f00da5a64d5740697aeab31aa30a46a19f6fe/diff with error: no such file or directory, continuing to push stats
	Jun 26 18:31:16 addons-052687 kubelet[1555]: W0626 18:31:16.911246    1555 container.go:586] Failed to update stats for container "/crio-4dae8023f39ceb8d1879dad75b1dc1a3d2235475a46a50c15fc5bad806048e50": unable to determine device info for dir: /var/lib/containers/storage/overlay/4c823c393ef31b812bea57e0a011ea03af30cc4f45dacf0c619e807224ab1849/diff: stat failed on /var/lib/containers/storage/overlay/4c823c393ef31b812bea57e0a011ea03af30cc4f45dacf0c619e807224ab1849/diff with error: no such file or directory, continuing to push stats
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.013181    1555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bndbv\" (UniqueName: \"kubernetes.io/projected/4fb955c3-4911-4d9c-b79c-d2fa0d56d850-kube-api-access-bndbv\") pod \"4fb955c3-4911-4d9c-b79c-d2fa0d56d850\" (UID: \"4fb955c3-4911-4d9c-b79c-d2fa0d56d850\") "
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.013232    1555 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fb955c3-4911-4d9c-b79c-d2fa0d56d850-webhook-cert\") pod \"4fb955c3-4911-4d9c-b79c-d2fa0d56d850\" (UID: \"4fb955c3-4911-4d9c-b79c-d2fa0d56d850\") "
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.015068    1555 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fb955c3-4911-4d9c-b79c-d2fa0d56d850-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4fb955c3-4911-4d9c-b79c-d2fa0d56d850" (UID: "4fb955c3-4911-4d9c-b79c-d2fa0d56d850"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.015261    1555 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fb955c3-4911-4d9c-b79c-d2fa0d56d850-kube-api-access-bndbv" (OuterVolumeSpecName: "kube-api-access-bndbv") pod "4fb955c3-4911-4d9c-b79c-d2fa0d56d850" (UID: "4fb955c3-4911-4d9c-b79c-d2fa0d56d850"). InnerVolumeSpecName "kube-api-access-bndbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.113926    1555 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bndbv\" (UniqueName: \"kubernetes.io/projected/4fb955c3-4911-4d9c-b79c-d2fa0d56d850-kube-api-access-bndbv\") on node \"addons-052687\" DevicePath \"\""
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.113972    1555 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fb955c3-4911-4d9c-b79c-d2fa0d56d850-webhook-cert\") on node \"addons-052687\" DevicePath \"\""
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.304959    1555 scope.go:115] "RemoveContainer" containerID="7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471"
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.315754    1555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-wvhm8" podStartSLOduration=2.092146422 podCreationTimestamp="2023-06-26 18:31:13 +0000 UTC" firstStartedPulling="2023-06-26 18:31:14.433357785 +0000 UTC m=+260.394304216" lastFinishedPulling="2023-06-26 18:31:16.65690745 +0000 UTC m=+262.617853880" observedRunningTime="2023-06-26 18:31:17.315460242 +0000 UTC m=+263.276406703" watchObservedRunningTime="2023-06-26 18:31:17.315696086 +0000 UTC m=+263.276642525"
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.323808    1555 scope.go:115] "RemoveContainer" containerID="7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471"
	Jun 26 18:31:17 addons-052687 kubelet[1555]: E0626 18:31:17.324298    1555 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471\": container with ID starting with 7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471 not found: ID does not exist" containerID="7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471"
	Jun 26 18:31:17 addons-052687 kubelet[1555]: I0626 18:31:17.324347    1555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471} err="failed to get container status \"7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471\": rpc error: code = NotFound desc = could not find container \"7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471\": container with ID starting with 7806df4f37cbf2ddf8a16c788678abe03d2b8cb8d693d97b0f65abdd1792a471 not found: ID does not exist"
	Jun 26 18:31:18 addons-052687 kubelet[1555]: I0626 18:31:18.195884    1555 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=4fb955c3-4911-4d9c-b79c-d2fa0d56d850 path="/var/lib/kubelet/pods/4fb955c3-4911-4d9c-b79c-d2fa0d56d850/volumes"
	Jun 26 18:31:21 addons-052687 kubelet[1555]: W0626 18:31:21.314960    1555 container.go:586] Failed to update stats for container "/docker/fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060/crio-6afa48fd56be4f357ccd9119fb693b746267cd6c5564cf19de76d48e16dc2945": unable to determine device info for dir: /var/lib/containers/storage/overlay/02577c55bbfc893673be327395764dae6258c27fe452b6ce071471675522da23/diff: stat failed on /var/lib/containers/storage/overlay/02577c55bbfc893673be327395764dae6258c27fe452b6ce071471675522da23/diff with error: no such file or directory, continuing to push stats
	Jun 26 18:31:22 addons-052687 kubelet[1555]: W0626 18:31:22.238120    1555 container.go:586] Failed to update stats for container "/docker/fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060/crio-352f122c8d7cf61c484553aedae754df5e10d8aaaec35597f2471e503553338f": unable to determine device info for dir: /var/lib/containers/storage/overlay/940d9d3d238c66b3f1e8f8bdabeb6d55b2d4f383244a157dc1918aa0d14961bf/diff: stat failed on /var/lib/containers/storage/overlay/940d9d3d238c66b3f1e8f8bdabeb6d55b2d4f383244a157dc1918aa0d14961bf/diff with error: no such file or directory, continuing to push stats
	Jun 26 18:31:22 addons-052687 kubelet[1555]: W0626 18:31:22.505466    1555 container.go:586] Failed to update stats for container "/docker/fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060/crio-4e14561face2fb7aaa3a5b137d48e9b1a50eaf86ae6f7ea69d330dae91ea2078": unable to determine device info for dir: /var/lib/containers/storage/overlay/bd625246dd32e14847f6f456e12db825b5764d0383df150e4d497a933db6be9a/diff: stat failed on /var/lib/containers/storage/overlay/bd625246dd32e14847f6f456e12db825b5764d0383df150e4d497a933db6be9a/diff with error: no such file or directory, continuing to push stats
	Jun 26 18:31:23 addons-052687 kubelet[1555]: W0626 18:31:23.491885    1555 container.go:586] Failed to update stats for container "/docker/fe4be2990cc58d40bf4cb72a7ff6539b16c944e7e011e89d013cdeda72bb7060/crio-08dac47e53dac7a7828dc2f74bfb41438141ee02f02c71b6dadceddbc7e01cc2": unable to determine device info for dir: /var/lib/containers/storage/overlay/16edfd616c36f3908cf57bfa9098b3c1f9a4d233c548fdbb6f335d25145bded5/diff: stat failed on /var/lib/containers/storage/overlay/16edfd616c36f3908cf57bfa9098b3c1f9a4d233c548fdbb6f335d25145bded5/diff with error: no such file or directory, continuing to push stats
	Jun 26 18:31:23 addons-052687 kubelet[1555]: W0626 18:31:23.563455    1555 container.go:586] Failed to update stats for container "/crio-6afa48fd56be4f357ccd9119fb693b746267cd6c5564cf19de76d48e16dc2945": unable to determine device info for dir: /var/lib/containers/storage/overlay/02577c55bbfc893673be327395764dae6258c27fe452b6ce071471675522da23/diff: stat failed on /var/lib/containers/storage/overlay/02577c55bbfc893673be327395764dae6258c27fe452b6ce071471675522da23/diff with error: no such file or directory, continuing to push stats
	
	* 
	* ==> storage-provisioner [97624b14376bea9959bd02dae8e13610611d09ac6b5efb13b26b2f4b2cf5b3b6] <==
	* I0626 18:27:42.120593       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 18:27:42.128999       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 18:27:42.129066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 18:27:42.137405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 18:27:42.137553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-052687_f0234c20-4b53-4c0d-909b-8c47c3f457a5!
	I0626 18:27:42.138052       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c6b866c-cfa2-4140-8d93-bca89d7035a0", APIVersion:"v1", ResourceVersion:"809", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-052687_f0234c20-4b53-4c0d-909b-8c47c3f457a5 became leader
	I0626 18:27:42.238056       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-052687_f0234c20-4b53-4c0d-909b-8c47c3f457a5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-052687 -n addons-052687
helpers_test.go:261: (dbg) Run:  kubectl --context addons-052687 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (161.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.863097177s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-900227
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image load --daemon gcr.io/google-containers/addon-resizer:functional-900227 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 image load --daemon gcr.io/google-containers/addon-resizer:functional-900227 --alsologtostderr: (8.137570487s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 image ls: (2.226738148s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-900227" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.25s)
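
For reference, the sequence this test automates can be replayed by hand to check whether the daemon-loaded image actually shows up inside the cluster. This is only a sketch based on the commands logged above: the profile name functional-900227 is specific to this run, and the final grep is an illustrative filter that is not part of the test.

	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-900227
	out/minikube-linux-amd64 -p functional-900227 image load --daemon gcr.io/google-containers/addon-resizer:functional-900227
	out/minikube-linux-amd64 -p functional-900227 image ls | grep addon-resizer

An empty result from the last command reproduces the failure above: image load reports success, but the tag is missing from the runtime's image list.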

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (179.36s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-022189 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-022189 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.145637018s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-022189 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-022189 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [505fb212-8050-494f-8fb0-86c4c2a8fb65] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [505fb212-8050-494f-8fb0-86c4c2a8fb65] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.008478346s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-022189 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0626 18:38:31.373227  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:38:59.059789  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:40:10.381132  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:10.386465  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:10.396774  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:10.417033  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:10.457376  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:10.537747  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:10.698197  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:11.018754  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:11.659863  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:12.940348  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:15.501008  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:40:20.621349  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-022189 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.202134794s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-022189 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-022189 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0626 18:40:30.862571  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.005112865s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-022189 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-022189 addons disable ingress-dns --alsologtostderr -v=1: (1.202229617s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-022189 addons disable ingress --alsologtostderr -v=1
E0626 18:40:51.343065  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-022189 addons disable ingress --alsologtostderr -v=1: (7.22702507s)
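
The two steps that actually failed in this test were the in-node curl against the ingress endpoint and the host-side nslookup against the ingress-dns address. As a rough manual replay (a sketch only; the profile name and the 192.168.49.2 node IP are taken from the log above), the same checks are:

	out/minikube-linux-amd64 -p ingress-addon-legacy-022189 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test 192.168.49.2

In this run both timed out: the curl run via minikube ssh exited with status 28 (which matches curl's operation-timed-out exit code) after roughly 2m10s, and nslookup reported that no servers could be reached.
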
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-022189
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-022189:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a56405f3b6c380134d8c160059ff65aab3291593447d099e9bcf210691d1b79",
	        "Created": "2023-06-26T18:36:28.084992841Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376386,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-26T18:36:28.406142849Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:42a2b4e0d52aa58abe36e9abb680d93c11444dcb07814b595a45d2fa0f8a777c",
	        "ResolvConfPath": "/var/lib/docker/containers/9a56405f3b6c380134d8c160059ff65aab3291593447d099e9bcf210691d1b79/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a56405f3b6c380134d8c160059ff65aab3291593447d099e9bcf210691d1b79/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a56405f3b6c380134d8c160059ff65aab3291593447d099e9bcf210691d1b79/hosts",
	        "LogPath": "/var/lib/docker/containers/9a56405f3b6c380134d8c160059ff65aab3291593447d099e9bcf210691d1b79/9a56405f3b6c380134d8c160059ff65aab3291593447d099e9bcf210691d1b79-json.log",
	        "Name": "/ingress-addon-legacy-022189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-022189:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-022189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/06fa580a04079e23e8e0df5f659fcf573ae25c037208c0021f188c4a0af13ca6-init/diff:/var/lib/docker/overlay2/8f9a4266fd693ed66b9874436fe49dcae15615f8bcd132a5a8e8ba2403f6ef40/diff",
	                "MergedDir": "/var/lib/docker/overlay2/06fa580a04079e23e8e0df5f659fcf573ae25c037208c0021f188c4a0af13ca6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/06fa580a04079e23e8e0df5f659fcf573ae25c037208c0021f188c4a0af13ca6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/06fa580a04079e23e8e0df5f659fcf573ae25c037208c0021f188c4a0af13ca6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-022189",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-022189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-022189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-022189",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-022189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edc13adc80fed29679f7691ab64e723ee423496f3ee9b34456a28868f43bb3de",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/edc13adc80fe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-022189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9a56405f3b6c",
	                        "ingress-addon-legacy-022189"
	                    ],
	                    "NetworkID": "e2df93848731375b184fbbf040433bf24ef16b10faaf2d25e8eac37e8b207ec2",
	                    "EndpointID": "06aa896962b12fc23d2d8b15d44e3bb426ffb84878328d70124d99fd09765879",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-022189 -n ingress-addon-legacy-022189
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-022189 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-022189 logs -n 25: (1.043861979s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|---------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                Args                                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|-----------|---------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| addons    | functional-900227 addons list                                       | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | -o json                                                             |                             |         |         |                     |                     |
	| service   | functional-900227 service list                                      | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	| service   | functional-900227 service list                                      | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | -o json                                                             |                             |         |         |                     |                     |
	| service   | functional-900227 service                                           | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | --namespace=default --https                                         |                             |         |         |                     |                     |
	|           | --url hello-node                                                    |                             |         |         |                     |                     |
	| service   | functional-900227                                                   | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | service hello-node --url                                            |                             |         |         |                     |                     |
	|           | --format={{.IP}}                                                    |                             |         |         |                     |                     |
	| service   | functional-900227 service                                           | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | hello-node --url                                                    |                             |         |         |                     |                     |
	| ssh       | functional-900227 ssh findmnt                                       | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC |                     |
	|           | -T /mount-9p | grep 9p                                              |                             |         |         |                     |                     |
	| mount     | -p functional-900227                                                | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdany-port2304509798/001:/mount-9p |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                              |                             |         |         |                     |                     |
	| service   | functional-900227 service                                           | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | hello-node-connect --url                                            |                             |         |         |                     |                     |
	| start     | -p functional-900227                                                | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC |                     |
	|           | --dry-run --memory                                                  |                             |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                             |                             |         |         |                     |                     |
	|           | --driver=docker                                                     |                             |         |         |                     |                     |
	|           | --container-runtime=crio                                            |                             |         |         |                     |                     |
	| dashboard | --url --port 36195                                                  | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | -p functional-900227                                                |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                              |                             |         |         |                     |                     |
	| start     | -p functional-900227                                                | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC |                     |
	|           | --dry-run --alsologtostderr                                         |                             |         |         |                     |                     |
	|           | -v=1 --driver=docker                                                |                             |         |         |                     |                     |
	|           | --container-runtime=crio                                            |                             |         |         |                     |                     |
	| ssh       | functional-900227 ssh findmnt                                       | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | -T /mount-9p | grep 9p                                              |                             |         |         |                     |                     |
	| start     | -p functional-900227                                                | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC |                     |
	|           | --dry-run --memory                                                  |                             |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                             |                             |         |         |                     |                     |
	|           | --driver=docker                                                     |                             |         |         |                     |                     |
	|           | --container-runtime=crio                                            |                             |         |         |                     |                     |
	| ssh       | functional-900227 ssh sudo cat                                      | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | /etc/ssl/certs/336935.pem                                           |                             |         |         |                     |                     |
	| ssh       | functional-900227 ssh -- ls                                         | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	|           | -la /mount-9p                                                       |                             |         |         |                     |                     |
	| image     | functional-900227 image ls                                          | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:35 UTC | 26 Jun 23 18:35 UTC |
	| delete    | -p functional-900227                                                | functional-900227           | jenkins | v1.30.1 | 26 Jun 23 18:36 UTC | 26 Jun 23 18:36 UTC |
	| start     | -p ingress-addon-legacy-022189                                      | ingress-addon-legacy-022189 | jenkins | v1.30.1 | 26 Jun 23 18:36 UTC | 26 Jun 23 18:37 UTC |
	|           | --kubernetes-version=v1.18.20                                       |                             |         |         |                     |                     |
	|           | --memory=4096 --wait=true                                           |                             |         |         |                     |                     |
	|           | --alsologtostderr                                                   |                             |         |         |                     |                     |
	|           | -v=5 --driver=docker                                                |                             |         |         |                     |                     |
	|           | --container-runtime=crio                                            |                             |         |         |                     |                     |
	| addons    | ingress-addon-legacy-022189                                         | ingress-addon-legacy-022189 | jenkins | v1.30.1 | 26 Jun 23 18:37 UTC | 26 Jun 23 18:37 UTC |
	|           | addons enable ingress                                               |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=5                                              |                             |         |         |                     |                     |
	| addons    | ingress-addon-legacy-022189                                         | ingress-addon-legacy-022189 | jenkins | v1.30.1 | 26 Jun 23 18:37 UTC | 26 Jun 23 18:37 UTC |
	|           | addons enable ingress-dns                                           |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=5                                              |                             |         |         |                     |                     |
	| ssh       | ingress-addon-legacy-022189                                         | ingress-addon-legacy-022189 | jenkins | v1.30.1 | 26 Jun 23 18:38 UTC |                     |
	|           | ssh curl -s http://127.0.0.1/                                       |                             |         |         |                     |                     |
	|           | -H 'Host: nginx.example.com'                                        |                             |         |         |                     |                     |
	| ip        | ingress-addon-legacy-022189 ip                                      | ingress-addon-legacy-022189 | jenkins | v1.30.1 | 26 Jun 23 18:40 UTC | 26 Jun 23 18:40 UTC |
	| addons    | ingress-addon-legacy-022189                                         | ingress-addon-legacy-022189 | jenkins | v1.30.1 | 26 Jun 23 18:40 UTC | 26 Jun 23 18:40 UTC |
	|           | addons disable ingress-dns                                          |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                              |                             |         |         |                     |                     |
	| addons    | ingress-addon-legacy-022189                                         | ingress-addon-legacy-022189 | jenkins | v1.30.1 | 26 Jun 23 18:40 UTC | 26 Jun 23 18:40 UTC |
	|           | addons disable ingress                                              |                             |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                              |                             |         |         |                     |                     |
	|-----------|---------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 18:36:05
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 18:36:05.589931  375694 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:36:05.590077  375694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:36:05.590086  375694 out.go:309] Setting ErrFile to fd 2...
	I0626 18:36:05.590090  375694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:36:05.590204  375694 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 18:36:05.590795  375694 out.go:303] Setting JSON to false
	I0626 18:36:05.591794  375694 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4716,"bootTime":1687799850,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 18:36:05.591854  375694 start.go:137] virtualization: kvm guest
	I0626 18:36:05.594168  375694 out.go:177] * [ingress-addon-legacy-022189] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 18:36:05.595540  375694 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 18:36:05.595593  375694 notify.go:220] Checking for updates...
	I0626 18:36:05.597008  375694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 18:36:05.598401  375694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:36:05.599783  375694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 18:36:05.601028  375694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 18:36:05.602273  375694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 18:36:05.603781  375694 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 18:36:05.625819  375694 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 18:36:05.625921  375694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:36:05.671973  375694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-06-26 18:36:05.663827903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:36:05.672083  375694 docker.go:294] overlay module found
	I0626 18:36:05.673942  375694 out.go:177] * Using the docker driver based on user configuration
	I0626 18:36:05.675266  375694 start.go:297] selected driver: docker
	I0626 18:36:05.675282  375694 start.go:954] validating driver "docker" against <nil>
	I0626 18:36:05.675296  375694 start.go:965] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 18:36:05.676108  375694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:36:05.722636  375694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-06-26 18:36:05.713715325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:36:05.722794  375694 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 18:36:05.723023  375694 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 18:36:05.725061  375694 out.go:177] * Using Docker driver with root privileges
	I0626 18:36:05.726251  375694 cni.go:84] Creating CNI manager for ""
	I0626 18:36:05.726264  375694 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0626 18:36:05.726275  375694 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0626 18:36:05.726285  375694 start_flags.go:319] config:
	{Name:ingress-addon-legacy-022189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:36:05.727715  375694 out.go:177] * Starting control plane node ingress-addon-legacy-022189 in cluster ingress-addon-legacy-022189
	I0626 18:36:05.729014  375694 cache.go:122] Beginning downloading kic base image for docker with crio
	I0626 18:36:05.730195  375694 out.go:177] * Pulling base image ...
	I0626 18:36:05.731405  375694 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0626 18:36:05.731509  375694 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon
	I0626 18:36:05.747206  375694 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon, skipping pull
	I0626 18:36:05.747247  375694 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 exists in daemon, skipping load
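The pull is skipped because the digest-pinned kicbase image is already present in the local Docker daemon. A quick way to confirm that by hand; the image reference is the one logged above, while the docker image inspect call itself is only an illustrative check and not part of this run:

	# resolves only if the exact digest-pinned kicbase build is present locally
	docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 --format '{{.Id}}'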
	I0626 18:36:06.104881  375694 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0626 18:36:06.104911  375694 cache.go:57] Caching tarball of preloaded images
	I0626 18:36:06.105095  375694 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0626 18:36:06.107149  375694 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0626 18:36:06.108614  375694 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0626 18:36:06.212066  375694 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0626 18:36:19.949417  375694 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0626 18:36:19.949523  375694 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0626 18:36:20.897083  375694 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
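The preload is fetched with an md5 checksum carried in the URL query (0d02e096853189c5b37812b400898e14) and re-verified after saving. A hand-run equivalent of that check, assuming curl and md5sum are available on the host (neither is invoked by minikube itself here):

	# download the same v1.18.20 CRI-O preload and verify it against the logged md5
	curl -fLo preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -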
	I0626 18:36:20.897493  375694 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/config.json ...
	I0626 18:36:20.897530  375694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/config.json: {Name:mk1568ac81e0c71960579c3d3cd8914a8cd07b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:36:20.897692  375694 cache.go:195] Successfully downloaded all kic artifacts
	I0626 18:36:20.897715  375694 start.go:365] acquiring machines lock for ingress-addon-legacy-022189: {Name:mk989a6285e793c346230605fb3496df1d7f2960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:36:20.897755  375694 start.go:369] acquired machines lock for "ingress-addon-legacy-022189" in 27.784µs
	I0626 18:36:20.897773  375694 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-022189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 18:36:20.897852  375694 start.go:125] createHost starting for "" (driver="docker")
	I0626 18:36:20.899976  375694 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0626 18:36:20.900216  375694 start.go:159] libmachine.API.Create for "ingress-addon-legacy-022189" (driver="docker")
	I0626 18:36:20.900240  375694 client.go:168] LocalClient.Create starting
	I0626 18:36:20.900324  375694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem
	I0626 18:36:20.900355  375694 main.go:141] libmachine: Decoding PEM data...
	I0626 18:36:20.900372  375694 main.go:141] libmachine: Parsing certificate...
	I0626 18:36:20.900434  375694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem
	I0626 18:36:20.900456  375694 main.go:141] libmachine: Decoding PEM data...
	I0626 18:36:20.900470  375694 main.go:141] libmachine: Parsing certificate...
	I0626 18:36:20.900767  375694 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-022189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0626 18:36:20.916217  375694 cli_runner.go:211] docker network inspect ingress-addon-legacy-022189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0626 18:36:20.916283  375694 network_create.go:281] running [docker network inspect ingress-addon-legacy-022189] to gather additional debugging logs...
	I0626 18:36:20.916302  375694 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-022189
	W0626 18:36:20.931242  375694 cli_runner.go:211] docker network inspect ingress-addon-legacy-022189 returned with exit code 1
	I0626 18:36:20.931277  375694 network_create.go:284] error running [docker network inspect ingress-addon-legacy-022189]: docker network inspect ingress-addon-legacy-022189: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-022189 not found
	I0626 18:36:20.931301  375694 network_create.go:286] output of [docker network inspect ingress-addon-legacy-022189]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-022189 not found
	
	** /stderr **
	I0626 18:36:20.931352  375694 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0626 18:36:20.947097  375694 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011607e0}
	I0626 18:36:20.947145  375694 network_create.go:123] attempt to create docker network ingress-addon-legacy-022189 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0626 18:36:20.947192  375694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-022189 ingress-addon-legacy-022189
	I0626 18:36:20.997570  375694 network_create.go:107] docker network ingress-addon-legacy-022189 192.168.49.0/24 created
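The create command above pins the cluster network to 192.168.49.0/24 with gateway 192.168.49.1. A quick follow-up to confirm what was actually created; the --format expression is illustrative and not taken from this run:

	# print the subnet and gateway of the network minikube just created
	docker network inspect ingress-addon-legacy-022189 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'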
	I0626 18:36:20.997603  375694 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-022189" container
	I0626 18:36:20.997658  375694 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0626 18:36:21.012044  375694 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-022189 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022189 --label created_by.minikube.sigs.k8s.io=true
	I0626 18:36:21.029536  375694 oci.go:103] Successfully created a docker volume ingress-addon-legacy-022189
	I0626 18:36:21.029629  375694 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-022189-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022189 --entrypoint /usr/bin/test -v ingress-addon-legacy-022189:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -d /var/lib
	I0626 18:36:22.733464  375694 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-022189-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022189 --entrypoint /usr/bin/test -v ingress-addon-legacy-022189:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -d /var/lib: (1.703786883s)
	I0626 18:36:22.733499  375694 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-022189
	I0626 18:36:22.733523  375694 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0626 18:36:22.733551  375694 kic.go:190] Starting extracting preloaded images to volume ...
	I0626 18:36:22.733613  375694 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-022189:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -I lz4 -xf /preloaded.tar -C /extractDir
	I0626 18:36:28.021560  375694 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-022189:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -I lz4 -xf /preloaded.tar -C /extractDir: (5.287900439s)
	I0626 18:36:28.021597  375694 kic.go:199] duration metric: took 5.288041 seconds to extract preloaded images to volume
	W0626 18:36:28.021749  375694 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0626 18:36:28.021891  375694 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0626 18:36:28.070903  375694 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-022189 --name ingress-addon-legacy-022189 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022189 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-022189 --network ingress-addon-legacy-022189 --ip 192.168.49.2 --volume ingress-addon-legacy-022189:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0626 18:36:28.414256  375694 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022189 --format={{.State.Running}}
	I0626 18:36:28.432715  375694 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022189 --format={{.State.Status}}
	I0626 18:36:28.450272  375694 cli_runner.go:164] Run: docker exec ingress-addon-legacy-022189 stat /var/lib/dpkg/alternatives/iptables
	I0626 18:36:28.489054  375694 oci.go:144] the created container "ingress-addon-legacy-022189" has a running status.
	I0626 18:36:28.489090  375694 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa...
	I0626 18:36:28.655507  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0626 18:36:28.655561  375694 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0626 18:36:28.677367  375694 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022189 --format={{.State.Status}}
	I0626 18:36:28.694760  375694 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0626 18:36:28.694787  375694 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-022189 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0626 18:36:28.754555  375694 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022189 --format={{.State.Status}}
	I0626 18:36:28.775943  375694 machine.go:88] provisioning docker machine ...
	I0626 18:36:28.776006  375694 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-022189"
	I0626 18:36:28.776061  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:36:28.792066  375694 main.go:141] libmachine: Using SSH client type: native
	I0626 18:36:28.792756  375694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0626 18:36:28.792781  375694 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-022189 && echo "ingress-addon-legacy-022189" | sudo tee /etc/hostname
	I0626 18:36:28.793451  375694 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37460->127.0.0.1:33099: read: connection reset by peer
	I0626 18:36:31.937057  375694 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-022189
	
	I0626 18:36:31.937127  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:36:31.952416  375694 main.go:141] libmachine: Using SSH client type: native
	I0626 18:36:31.952833  375694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0626 18:36:31.952853  375694 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-022189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-022189/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-022189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 18:36:32.081094  375694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 18:36:32.081131  375694 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16761-330054/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-330054/.minikube}
	I0626 18:36:32.081163  375694 ubuntu.go:177] setting up certificates
	I0626 18:36:32.081177  375694 provision.go:83] configureAuth start
	I0626 18:36:32.081225  375694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-022189
	I0626 18:36:32.098291  375694 provision.go:138] copyHostCerts
	I0626 18:36:32.098335  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem
	I0626 18:36:32.098363  375694 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem, removing ...
	I0626 18:36:32.098374  375694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem
	I0626 18:36:32.098450  375694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem (1123 bytes)
	I0626 18:36:32.098550  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem
	I0626 18:36:32.098574  375694 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem, removing ...
	I0626 18:36:32.098583  375694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem
	I0626 18:36:32.098627  375694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem (1679 bytes)
	I0626 18:36:32.098687  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem
	I0626 18:36:32.098711  375694 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem, removing ...
	I0626 18:36:32.098720  375694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem
	I0626 18:36:32.098749  375694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem (1082 bytes)
	I0626 18:36:32.098878  375694 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-022189 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-022189]
	I0626 18:36:32.243450  375694 provision.go:172] copyRemoteCerts
	I0626 18:36:32.243513  375694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 18:36:32.243549  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:36:32.259610  375694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa Username:docker}
	I0626 18:36:32.353198  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0626 18:36:32.353259  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0626 18:36:32.374634  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0626 18:36:32.374696  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0626 18:36:32.395300  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0626 18:36:32.395363  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 18:36:32.416172  375694 provision.go:86] duration metric: configureAuth took 334.974476ms
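The server certificate generated at 18:36:32 was requested with the SAN list logged above (192.168.49.2, 127.0.0.1, localhost, minikube, ingress-addon-legacy-022189). Assuming openssl is available on the Jenkins host, the SANs actually baked into the file can be checked like this (illustrative only):

	# print the Subject Alternative Name extension of the generated server cert
	openssl x509 -in /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'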
	I0626 18:36:32.416198  375694 ubuntu.go:193] setting minikube options for container-runtime
	I0626 18:36:32.416349  375694 config.go:182] Loaded profile config "ingress-addon-legacy-022189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0626 18:36:32.416469  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:36:32.433400  375694 main.go:141] libmachine: Using SSH client type: native
	I0626 18:36:32.433784  375694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I0626 18:36:32.433800  375694 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 18:36:32.669827  375694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 18:36:32.669857  375694 machine.go:91] provisioned docker machine in 3.89388955s
	I0626 18:36:32.669869  375694 client.go:171] LocalClient.Create took 11.769623024s
	I0626 18:36:32.669893  375694 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-022189" took 11.769677368s
	I0626 18:36:32.669907  375694 start.go:300] post-start starting for "ingress-addon-legacy-022189" (driver="docker")
	I0626 18:36:32.669923  375694 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 18:36:32.669988  375694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 18:36:32.670055  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:36:32.685951  375694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa Username:docker}
	I0626 18:36:32.777494  375694 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 18:36:32.780374  375694 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0626 18:36:32.780403  375694 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0626 18:36:32.780412  375694 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0626 18:36:32.780420  375694 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0626 18:36:32.780431  375694 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/addons for local assets ...
	I0626 18:36:32.780489  375694 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/files for local assets ...
	I0626 18:36:32.780581  375694 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> 3369352.pem in /etc/ssl/certs
	I0626 18:36:32.780594  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> /etc/ssl/certs/3369352.pem
	I0626 18:36:32.780708  375694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 18:36:32.788241  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem --> /etc/ssl/certs/3369352.pem (1708 bytes)
	I0626 18:36:32.809163  375694 start.go:303] post-start completed in 139.237134ms
	I0626 18:36:32.809486  375694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-022189
	I0626 18:36:32.825472  375694 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/config.json ...
	I0626 18:36:32.825766  375694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 18:36:32.825817  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:36:32.841481  375694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa Username:docker}
	I0626 18:36:32.929566  375694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0626 18:36:32.933554  375694 start.go:128] duration metric: createHost completed in 12.035690547s
	I0626 18:36:32.933575  375694 start.go:83] releasing machines lock for "ingress-addon-legacy-022189", held for 12.035810862s
	I0626 18:36:32.933659  375694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-022189
	I0626 18:36:32.948938  375694 ssh_runner.go:195] Run: cat /version.json
	I0626 18:36:32.948982  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:36:32.949011  375694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 18:36:32.949103  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:36:32.965486  375694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa Username:docker}
	I0626 18:36:32.965895  375694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa Username:docker}
	I0626 18:36:33.149896  375694 ssh_runner.go:195] Run: systemctl --version
	I0626 18:36:33.154145  375694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 18:36:33.290307  375694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 18:36:33.294788  375694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:36:33.312549  375694 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0626 18:36:33.312628  375694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:36:33.338714  375694 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0626 18:36:33.338744  375694 start.go:466] detecting cgroup driver to use...
	I0626 18:36:33.338784  375694 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0626 18:36:33.338838  375694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 18:36:33.352585  375694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 18:36:33.362419  375694 docker.go:196] disabling cri-docker service (if available) ...
	I0626 18:36:33.362465  375694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 18:36:33.374058  375694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 18:36:33.385871  375694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 18:36:33.456169  375694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 18:36:33.532592  375694 docker.go:212] disabling docker service ...
	I0626 18:36:33.532655  375694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 18:36:33.549656  375694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 18:36:33.559794  375694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 18:36:33.639874  375694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 18:36:33.720554  375694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 18:36:33.730742  375694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 18:36:33.744499  375694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0626 18:36:33.744549  375694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:36:33.753147  375694 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 18:36:33.753210  375694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:36:33.761683  375694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:36:33.769925  375694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:36:33.778394  375694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 18:36:33.786169  375694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 18:36:33.793187  375694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 18:36:33.800290  375694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 18:36:33.875362  375694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 18:36:33.973184  375694 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 18:36:33.973269  375694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 18:36:33.976772  375694 start.go:534] Will wait 60s for crictl version
	I0626 18:36:33.976815  375694 ssh_runner.go:195] Run: which crictl
	I0626 18:36:33.979724  375694 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 18:36:34.010784  375694 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0626 18:36:34.010867  375694 ssh_runner.go:195] Run: crio --version
	I0626 18:36:34.043079  375694 ssh_runner.go:195] Run: crio --version
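The sequence at 18:36:33 rewrites the cri-o drop-in and then restarts the service. Collected into one runnable block (the same commands as logged, executed on the node, e.g. via minikube ssh), it looks like this:

	# point cri-o at the legacy pause image and the cgroupfs manager, then restart it
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version   # should report cri-o 1.24.6, as it does above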
	I0626 18:36:34.077920  375694 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0626 18:36:34.079542  375694 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-022189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0626 18:36:34.095052  375694 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0626 18:36:34.098566  375694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 18:36:34.108595  375694 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0626 18:36:34.108643  375694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 18:36:34.150878  375694 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0626 18:36:34.150945  375694 ssh_runner.go:195] Run: which lz4
	I0626 18:36:34.154262  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0626 18:36:34.154341  375694 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0626 18:36:34.157517  375694 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0626 18:36:34.157545  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0626 18:36:35.066795  375694 crio.go:444] Took 0.912476 seconds to copy over tarball
	I0626 18:36:35.066856  375694 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0626 18:36:37.628920  375694 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.562027923s)
	I0626 18:36:37.628952  375694 crio.go:451] Took 2.562133 seconds to extract the tarball
	I0626 18:36:37.628961  375694 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0626 18:36:37.700841  375694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 18:36:37.732280  375694 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0626 18:36:37.732309  375694 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0626 18:36:37.732388  375694 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 18:36:37.732415  375694 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 18:36:37.732469  375694 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0626 18:36:37.732423  375694 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0626 18:36:37.732559  375694 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0626 18:36:37.732481  375694 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0626 18:36:37.732427  375694 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0626 18:36:37.732390  375694 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0626 18:36:37.733517  375694 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0626 18:36:37.733521  375694 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0626 18:36:37.733525  375694 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 18:36:37.733537  375694 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0626 18:36:37.733513  375694 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 18:36:37.733573  375694 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0626 18:36:37.733581  375694 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0626 18:36:37.733615  375694 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0626 18:36:37.945598  375694 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0626 18:36:37.981158  375694 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0626 18:36:37.981207  375694 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0626 18:36:37.981240  375694 ssh_runner.go:195] Run: which crictl
	I0626 18:36:37.984564  375694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0626 18:36:37.987912  375694 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0626 18:36:38.005199  375694 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0626 18:36:38.019154  375694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0626 18:36:38.025807  375694 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0626 18:36:38.025854  375694 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0626 18:36:38.025904  375694 ssh_runner.go:195] Run: which crictl
	I0626 18:36:38.042399  375694 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0626 18:36:38.042443  375694 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0626 18:36:38.042456  375694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0626 18:36:38.042475  375694 ssh_runner.go:195] Run: which crictl
	I0626 18:36:38.073385  375694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0626 18:36:38.073469  375694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0626 18:36:38.104986  375694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0626 18:36:38.105627  375694 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0626 18:36:38.107644  375694 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0626 18:36:38.116024  375694 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 18:36:38.119001  375694 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0626 18:36:38.193512  375694 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0626 18:36:38.193556  375694 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0626 18:36:38.193610  375694 ssh_runner.go:195] Run: which crictl
	I0626 18:36:38.194955  375694 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0626 18:36:38.195002  375694 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0626 18:36:38.195039  375694 ssh_runner.go:195] Run: which crictl
	I0626 18:36:38.195175  375694 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0626 18:36:38.195218  375694 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 18:36:38.195263  375694 ssh_runner.go:195] Run: which crictl
	I0626 18:36:38.205044  375694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0626 18:36:38.205089  375694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0626 18:36:38.205107  375694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0626 18:36:38.205051  375694 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0626 18:36:38.205148  375694 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0626 18:36:38.205182  375694 ssh_runner.go:195] Run: which crictl
	I0626 18:36:38.240572  375694 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0626 18:36:38.240711  375694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0626 18:36:38.244644  375694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0626 18:36:38.292676  375694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0626 18:36:38.316791  375694 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0626 18:36:38.972856  375694 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 18:36:39.108619  375694 cache_images.go:92] LoadImages completed in 1.376289924s
	W0626 18:36:39.108721  375694 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
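The warning above means LoadImages fell back because the on-host image cache has no pause_3.2 tarball. Listing the cache directory referenced by the Loading lines (an illustrative check, not part of the run) shows which cached image files are actually present:

	# the files LoadImages looked for live under this per-registry cache directory
	ls -l /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/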
	I0626 18:36:39.108804  375694 ssh_runner.go:195] Run: crio config
	I0626 18:36:39.149681  375694 cni.go:84] Creating CNI manager for ""
	I0626 18:36:39.149706  375694 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0626 18:36:39.149719  375694 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 18:36:39.149741  375694 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-022189 NodeName:ingress-addon-legacy-022189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0626 18:36:39.149925  375694 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-022189"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 18:36:39.150018  375694 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-022189 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022189 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 18:36:39.150082  375694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0626 18:36:39.158464  375694 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 18:36:39.158544  375694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 18:36:39.166954  375694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0626 18:36:39.182884  375694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0626 18:36:39.198579  375694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0626 18:36:39.213973  375694 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0626 18:36:39.217228  375694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 18:36:39.226950  375694 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189 for IP: 192.168.49.2
	I0626 18:36:39.226983  375694 certs.go:190] acquiring lock for shared ca certs: {Name:mk5dcd9e05f1fa507f67df494d102e50ef2554ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:36:39.227136  375694 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key
	I0626 18:36:39.227177  375694 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key
	I0626 18:36:39.227218  375694 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.key
	I0626 18:36:39.227230  375694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt with IP's: []
	I0626 18:36:39.323399  375694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt ...
	I0626 18:36:39.323433  375694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: {Name:mk855e4c9bc784b517249f68ea277c889afeb4fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:36:39.323609  375694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.key ...
	I0626 18:36:39.323620  375694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.key: {Name:mk97e6c9c55d360ec812add8cfb94eb2806c9c87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:36:39.323695  375694 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.key.dd3b5fb2
	I0626 18:36:39.323714  375694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0626 18:36:39.670461  375694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.crt.dd3b5fb2 ...
	I0626 18:36:39.670500  375694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.crt.dd3b5fb2: {Name:mkaf6b5bd2f24261c923a12b29a73316366d21fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:36:39.670669  375694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.key.dd3b5fb2 ...
	I0626 18:36:39.670680  375694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.key.dd3b5fb2: {Name:mk0b4a471a0c1d0f72d45bd636aabd3f72e2b50c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:36:39.670740  375694 certs.go:337] copying /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.crt
	I0626 18:36:39.670799  375694 certs.go:341] copying /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.key
	I0626 18:36:39.670852  375694 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.key
	I0626 18:36:39.670871  375694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.crt with IP's: []
	I0626 18:36:39.716402  375694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.crt ...
	I0626 18:36:39.716435  375694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.crt: {Name:mk98e075ae9f220ed9aec348f742c27967afb282 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:36:39.716598  375694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.key ...
	I0626 18:36:39.716608  375694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.key: {Name:mk9111e28b727f7eae5796ed77c62be12d948044 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:36:39.716683  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0626 18:36:39.716703  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0626 18:36:39.716719  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0626 18:36:39.716731  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0626 18:36:39.716747  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0626 18:36:39.716759  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0626 18:36:39.716771  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0626 18:36:39.716781  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0626 18:36:39.716830  375694 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935.pem (1338 bytes)
	W0626 18:36:39.716879  375694 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935_empty.pem, impossibly tiny 0 bytes
	I0626 18:36:39.716890  375694 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 18:36:39.716914  375694 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem (1082 bytes)
	I0626 18:36:39.716932  375694 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem (1123 bytes)
	I0626 18:36:39.716955  375694 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem (1679 bytes)
	I0626 18:36:39.716997  375694 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem (1708 bytes)
	I0626 18:36:39.717029  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:36:39.717044  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935.pem -> /usr/share/ca-certificates/336935.pem
	I0626 18:36:39.717055  375694 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> /usr/share/ca-certificates/3369352.pem
	I0626 18:36:39.717668  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 18:36:39.739282  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0626 18:36:39.759276  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 18:36:39.779795  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0626 18:36:39.800773  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 18:36:39.821839  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 18:36:39.842547  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 18:36:39.863357  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 18:36:39.884199  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 18:36:39.904901  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935.pem --> /usr/share/ca-certificates/336935.pem (1338 bytes)
	I0626 18:36:39.925334  375694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem --> /usr/share/ca-certificates/3369352.pem (1708 bytes)
	I0626 18:36:39.946115  375694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 18:36:39.961241  375694 ssh_runner.go:195] Run: openssl version
	I0626 18:36:39.966344  375694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 18:36:39.974893  375694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:36:39.977993  375694 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:36:39.978045  375694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:36:39.984217  375694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 18:36:39.992668  375694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/336935.pem && ln -fs /usr/share/ca-certificates/336935.pem /etc/ssl/certs/336935.pem"
	I0626 18:36:40.001009  375694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/336935.pem
	I0626 18:36:40.004058  375694 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 18:32 /usr/share/ca-certificates/336935.pem
	I0626 18:36:40.004101  375694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/336935.pem
	I0626 18:36:40.010200  375694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/336935.pem /etc/ssl/certs/51391683.0"
	I0626 18:36:40.018317  375694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3369352.pem && ln -fs /usr/share/ca-certificates/3369352.pem /etc/ssl/certs/3369352.pem"
	I0626 18:36:40.026640  375694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3369352.pem
	I0626 18:36:40.029747  375694 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 18:32 /usr/share/ca-certificates/3369352.pem
	I0626 18:36:40.029797  375694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3369352.pem
	I0626 18:36:40.035988  375694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3369352.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 18:36:40.044170  375694 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 18:36:40.047127  375694 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 18:36:40.047189  375694 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-022189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022189 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:36:40.047276  375694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 18:36:40.047321  375694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 18:36:40.079645  375694 cri.go:89] found id: ""
	I0626 18:36:40.079725  375694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 18:36:40.087830  375694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 18:36:40.095963  375694 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0626 18:36:40.096045  375694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 18:36:40.103816  375694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 18:36:40.103874  375694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0626 18:36:40.146059  375694 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0626 18:36:40.146133  375694 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 18:36:40.183850  375694 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0626 18:36:40.183944  375694 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1036-gcp
	I0626 18:36:40.184005  375694 kubeadm.go:322] OS: Linux
	I0626 18:36:40.184074  375694 kubeadm.go:322] CGROUPS_CPU: enabled
	I0626 18:36:40.184158  375694 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0626 18:36:40.184305  375694 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0626 18:36:40.184381  375694 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0626 18:36:40.184441  375694 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0626 18:36:40.184511  375694 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0626 18:36:40.249146  375694 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 18:36:40.249308  375694 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 18:36:40.249482  375694 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 18:36:40.429124  375694 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 18:36:40.429955  375694 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 18:36:40.430007  375694 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 18:36:40.499069  375694 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 18:36:40.503037  375694 out.go:204]   - Generating certificates and keys ...
	I0626 18:36:40.503173  375694 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 18:36:40.503260  375694 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 18:36:40.596854  375694 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0626 18:36:40.783153  375694 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0626 18:36:40.943281  375694 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0626 18:36:41.158306  375694 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0626 18:36:41.443145  375694 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0626 18:36:41.443371  375694 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-022189 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0626 18:36:41.639489  375694 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0626 18:36:41.639634  375694 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-022189 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0626 18:36:41.964517  375694 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0626 18:36:42.572075  375694 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0626 18:36:42.803200  375694 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0626 18:36:42.803290  375694 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 18:36:43.124906  375694 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 18:36:43.195247  375694 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 18:36:43.319038  375694 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 18:36:43.406409  375694 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 18:36:43.407028  375694 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 18:36:43.409233  375694 out.go:204]   - Booting up control plane ...
	I0626 18:36:43.409360  375694 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 18:36:43.413380  375694 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 18:36:43.414413  375694 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 18:36:43.415119  375694 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 18:36:43.417415  375694 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 18:36:50.419852  375694 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002335 seconds
	I0626 18:36:50.419953  375694 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 18:36:50.431509  375694 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 18:36:50.946596  375694 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 18:36:50.946796  375694 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-022189 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0626 18:36:51.452975  375694 kubeadm.go:322] [bootstrap-token] Using token: kydw42.lj5d4cj1tzpx2bg8
	I0626 18:36:51.454333  375694 out.go:204]   - Configuring RBAC rules ...
	I0626 18:36:51.454503  375694 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 18:36:51.457242  375694 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 18:36:51.462931  375694 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 18:36:51.464617  375694 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 18:36:51.466411  375694 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 18:36:51.468015  375694 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 18:36:51.474433  375694 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 18:36:51.680106  375694 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 18:36:51.870060  375694 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 18:36:51.871367  375694 kubeadm.go:322] 
	I0626 18:36:51.871444  375694 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 18:36:51.871453  375694 kubeadm.go:322] 
	I0626 18:36:51.871539  375694 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 18:36:51.871551  375694 kubeadm.go:322] 
	I0626 18:36:51.871572  375694 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 18:36:51.871620  375694 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 18:36:51.871663  375694 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 18:36:51.871671  375694 kubeadm.go:322] 
	I0626 18:36:51.871712  375694 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 18:36:51.871771  375694 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 18:36:51.871827  375694 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 18:36:51.871833  375694 kubeadm.go:322] 
	I0626 18:36:51.871899  375694 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 18:36:51.871962  375694 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 18:36:51.871977  375694 kubeadm.go:322] 
	I0626 18:36:51.872045  375694 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kydw42.lj5d4cj1tzpx2bg8 \
	I0626 18:36:51.872132  375694 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac \
	I0626 18:36:51.872153  375694 kubeadm.go:322]     --control-plane 
	I0626 18:36:51.872159  375694 kubeadm.go:322] 
	I0626 18:36:51.872226  375694 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 18:36:51.872232  375694 kubeadm.go:322] 
	I0626 18:36:51.872542  375694 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kydw42.lj5d4cj1tzpx2bg8 \
	I0626 18:36:51.872669  375694 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac 
	I0626 18:36:51.874093  375694 kubeadm.go:322] W0626 18:36:40.145550    1387 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0626 18:36:51.874338  375694 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-gcp\n", err: exit status 1
	I0626 18:36:51.874467  375694 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 18:36:51.874598  375694 kubeadm.go:322] W0626 18:36:43.413104    1387 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0626 18:36:51.874726  375694 kubeadm.go:322] W0626 18:36:43.414205    1387 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0626 18:36:51.874753  375694 cni.go:84] Creating CNI manager for ""
	I0626 18:36:51.874766  375694 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0626 18:36:51.877364  375694 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0626 18:36:51.878789  375694 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0626 18:36:51.882617  375694 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0626 18:36:51.882640  375694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0626 18:36:51.899159  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0626 18:36:52.331440  375694 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 18:36:52.331485  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:52.331487  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=ingress-addon-legacy-022189 minikube.k8s.io/updated_at=2023_06_26T18_36_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:52.431019  375694 ops.go:34] apiserver oom_adj: -16
	I0626 18:36:52.431057  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:53.007027  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:53.507181  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:54.006488  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:54.506634  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:55.006824  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:55.506530  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:56.007218  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:56.506967  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:57.006455  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:57.507427  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:58.006984  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:58.506504  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:59.007347  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:36:59.507074  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:00.007334  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:00.506562  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:01.007205  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:01.506478  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:02.006626  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:02.506727  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:03.006941  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:03.507213  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:04.006644  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:04.506610  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:05.007035  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:05.506793  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:06.006448  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:06.506718  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:07.007044  375694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:37:07.072163  375694 kubeadm.go:1081] duration metric: took 14.740736855s to wait for elevateKubeSystemPrivileges.
	I0626 18:37:07.072198  375694 kubeadm.go:406] StartCluster complete in 27.025020134s
	I0626 18:37:07.072222  375694 settings.go:142] acquiring lock: {Name:mkb5ecb1b3f16a0c9ac49740714c898cb701a346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:37:07.072304  375694 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:37:07.073144  375694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/kubeconfig: {Name:mk4c2529327c78ca1f9c9f9cbf169818d7b9a7d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:37:07.073384  375694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 18:37:07.073460  375694 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 18:37:07.073552  375694 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-022189"
	I0626 18:37:07.073565  375694 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-022189"
	I0626 18:37:07.073571  375694 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-022189"
	I0626 18:37:07.073596  375694 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-022189"
	I0626 18:37:07.073607  375694 config.go:182] Loaded profile config "ingress-addon-legacy-022189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0626 18:37:07.073660  375694 host.go:66] Checking if "ingress-addon-legacy-022189" exists ...
	I0626 18:37:07.073982  375694 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022189 --format={{.State.Status}}
	I0626 18:37:07.074251  375694 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022189 --format={{.State.Status}}
	I0626 18:37:07.074272  375694 kapi.go:59] client config for ingress-addon-legacy-022189: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.key", CAFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]ui
nt8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 18:37:07.075074  375694 cert_rotation.go:137] Starting client certificate rotation controller
	I0626 18:37:07.094426  375694 kapi.go:59] client config for ingress-addon-legacy-022189: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.key", CAFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]ui
nt8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 18:37:07.096252  375694 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 18:37:07.097188  375694 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-022189"
	I0626 18:37:07.097799  375694 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 18:37:07.097815  375694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 18:37:07.097844  375694 host.go:66] Checking if "ingress-addon-legacy-022189" exists ...
	I0626 18:37:07.097877  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:37:07.098354  375694 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022189 --format={{.State.Status}}
	I0626 18:37:07.117073  375694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa Username:docker}
	I0626 18:37:07.117262  375694 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 18:37:07.117284  375694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 18:37:07.117339  375694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022189
	I0626 18:37:07.138424  375694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/ingress-addon-legacy-022189/id_rsa Username:docker}
	I0626 18:37:07.157135  375694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 18:37:07.224903  375694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 18:37:07.304797  375694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 18:37:07.512725  375694 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0626 18:37:07.601515  375694 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-022189" context rescaled to 1 replicas
	I0626 18:37:07.601576  375694 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 18:37:07.609292  375694 out.go:177] * Verifying Kubernetes components...
	I0626 18:37:07.611342  375694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 18:37:07.895584  375694 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0626 18:37:07.894484  375694 kapi.go:59] client config for ingress-addon-legacy-022189: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.key", CAFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]ui
nt8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 18:37:07.897124  375694 addons.go:499] enable addons completed in 823.656861ms: enabled=[storage-provisioner default-storageclass]
	I0626 18:37:07.897406  375694 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-022189" to be "Ready" ...
	I0626 18:37:09.904450  375694 node_ready.go:58] node "ingress-addon-legacy-022189" has status "Ready":"False"
	I0626 18:37:12.404193  375694 node_ready.go:58] node "ingress-addon-legacy-022189" has status "Ready":"False"
	I0626 18:37:14.905027  375694 node_ready.go:58] node "ingress-addon-legacy-022189" has status "Ready":"False"
	I0626 18:37:17.403611  375694 node_ready.go:58] node "ingress-addon-legacy-022189" has status "Ready":"False"
	I0626 18:37:19.405290  375694 node_ready.go:58] node "ingress-addon-legacy-022189" has status "Ready":"False"
	I0626 18:37:21.904743  375694 node_ready.go:58] node "ingress-addon-legacy-022189" has status "Ready":"False"
	I0626 18:37:22.403837  375694 node_ready.go:49] node "ingress-addon-legacy-022189" has status "Ready":"True"
	I0626 18:37:22.403863  375694 node_ready.go:38] duration metric: took 14.506419507s waiting for node "ingress-addon-legacy-022189" to be "Ready" ...
	I0626 18:37:22.403872  375694 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 18:37:22.410362  375694 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:24.416558  375694 pod_ready.go:102] pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 18:37:07 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 18:37:26.916121  375694 pod_ready.go:102] pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 18:37:07 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 18:37:29.415382  375694 pod_ready.go:102] pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-06-26 18:37:07 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0626 18:37:31.419474  375694 pod_ready.go:102] pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace has status "Ready":"False"
	I0626 18:37:33.918846  375694 pod_ready.go:102] pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace has status "Ready":"False"
	I0626 18:37:36.418291  375694 pod_ready.go:102] pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace has status "Ready":"False"
	I0626 18:37:38.419044  375694 pod_ready.go:102] pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace has status "Ready":"False"
	I0626 18:37:40.918062  375694 pod_ready.go:92] pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace has status "Ready":"True"
	I0626 18:37:40.918092  375694 pod_ready.go:81] duration metric: took 18.507700059s waiting for pod "coredns-66bff467f8-2b7cv" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:40.918112  375694 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-022189" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:40.922244  375694 pod_ready.go:92] pod "etcd-ingress-addon-legacy-022189" in "kube-system" namespace has status "Ready":"True"
	I0626 18:37:40.922263  375694 pod_ready.go:81] duration metric: took 4.1432ms waiting for pod "etcd-ingress-addon-legacy-022189" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:40.922273  375694 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-022189" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:40.928280  375694 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-022189" in "kube-system" namespace has status "Ready":"True"
	I0626 18:37:40.928309  375694 pod_ready.go:81] duration metric: took 6.019666ms waiting for pod "kube-apiserver-ingress-addon-legacy-022189" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:40.928318  375694 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-022189" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:40.932054  375694 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-022189" in "kube-system" namespace has status "Ready":"True"
	I0626 18:37:40.932073  375694 pod_ready.go:81] duration metric: took 3.749566ms waiting for pod "kube-controller-manager-ingress-addon-legacy-022189" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:40.932082  375694 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xb7jk" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:40.935756  375694 pod_ready.go:92] pod "kube-proxy-xb7jk" in "kube-system" namespace has status "Ready":"True"
	I0626 18:37:40.935773  375694 pod_ready.go:81] duration metric: took 3.685879ms waiting for pod "kube-proxy-xb7jk" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:40.935781  375694 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-022189" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:41.113145  375694 request.go:628] Waited for 177.287412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-022189
	I0626 18:37:41.314072  375694 request.go:628] Waited for 198.296832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-022189
	I0626 18:37:41.316816  375694 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-022189" in "kube-system" namespace has status "Ready":"True"
	I0626 18:37:41.316837  375694 pod_ready.go:81] duration metric: took 381.050591ms waiting for pod "kube-scheduler-ingress-addon-legacy-022189" in "kube-system" namespace to be "Ready" ...
	I0626 18:37:41.316851  375694 pod_ready.go:38] duration metric: took 18.912964476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 18:37:41.316897  375694 api_server.go:52] waiting for apiserver process to appear ...
	I0626 18:37:41.316973  375694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 18:37:41.327821  375694 api_server.go:72] duration metric: took 33.726208454s to wait for apiserver process to appear ...
	I0626 18:37:41.327846  375694 api_server.go:88] waiting for apiserver healthz status ...
	I0626 18:37:41.327867  375694 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0626 18:37:41.332763  375694 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0626 18:37:41.333595  375694 api_server.go:141] control plane version: v1.18.20
	I0626 18:37:41.333619  375694 api_server.go:131] duration metric: took 5.766108ms to wait for apiserver health ...
	I0626 18:37:41.333627  375694 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 18:37:41.514080  375694 request.go:628] Waited for 180.364532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0626 18:37:41.518752  375694 system_pods.go:59] 8 kube-system pods found
	I0626 18:37:41.518785  375694 system_pods.go:61] "coredns-66bff467f8-2b7cv" [ca48d906-a2ef-4188-82d5-9f22ae83612b] Running
	I0626 18:37:41.518792  375694 system_pods.go:61] "etcd-ingress-addon-legacy-022189" [34339509-db01-4771-a306-8b89cc3dfdb9] Running
	I0626 18:37:41.518796  375694 system_pods.go:61] "kindnet-rj575" [4fa97662-47f8-439f-bbff-4504290771a0] Running
	I0626 18:37:41.518800  375694 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-022189" [83ac8d06-f139-459b-956a-314279d10197] Running
	I0626 18:37:41.518804  375694 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-022189" [07cd2264-2fff-4d3d-a85a-1d59c75c6036] Running
	I0626 18:37:41.518809  375694 system_pods.go:61] "kube-proxy-xb7jk" [ad463ac1-c060-42ee-906e-2a0579415f4b] Running
	I0626 18:37:41.518821  375694 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-022189" [ce4047c5-1adb-4a4d-9fd6-11deb3ef4b75] Running
	I0626 18:37:41.518833  375694 system_pods.go:61] "storage-provisioner" [3aa1e0a0-927d-4913-a9f1-a1159d2afb95] Running
	I0626 18:37:41.518838  375694 system_pods.go:74] duration metric: took 185.206147ms to wait for pod list to return data ...
	I0626 18:37:41.518845  375694 default_sa.go:34] waiting for default service account to be created ...
	I0626 18:37:41.713219  375694 request.go:628] Waited for 194.293519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0626 18:37:41.716035  375694 default_sa.go:45] found service account: "default"
	I0626 18:37:41.716062  375694 default_sa.go:55] duration metric: took 197.210245ms for default service account to be created ...
	I0626 18:37:41.716075  375694 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 18:37:41.913520  375694 request.go:628] Waited for 197.366576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0626 18:37:41.919639  375694 system_pods.go:86] 8 kube-system pods found
	I0626 18:37:41.919672  375694 system_pods.go:89] "coredns-66bff467f8-2b7cv" [ca48d906-a2ef-4188-82d5-9f22ae83612b] Running
	I0626 18:37:41.919684  375694 system_pods.go:89] "etcd-ingress-addon-legacy-022189" [34339509-db01-4771-a306-8b89cc3dfdb9] Running
	I0626 18:37:41.919691  375694 system_pods.go:89] "kindnet-rj575" [4fa97662-47f8-439f-bbff-4504290771a0] Running
	I0626 18:37:41.919696  375694 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-022189" [83ac8d06-f139-459b-956a-314279d10197] Running
	I0626 18:37:41.919701  375694 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-022189" [07cd2264-2fff-4d3d-a85a-1d59c75c6036] Running
	I0626 18:37:41.919707  375694 system_pods.go:89] "kube-proxy-xb7jk" [ad463ac1-c060-42ee-906e-2a0579415f4b] Running
	I0626 18:37:41.919713  375694 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-022189" [ce4047c5-1adb-4a4d-9fd6-11deb3ef4b75] Running
	I0626 18:37:41.919719  375694 system_pods.go:89] "storage-provisioner" [3aa1e0a0-927d-4913-a9f1-a1159d2afb95] Running
	I0626 18:37:41.919729  375694 system_pods.go:126] duration metric: took 203.647995ms to wait for k8s-apps to be running ...
	I0626 18:37:41.919748  375694 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 18:37:41.919804  375694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 18:37:41.931041  375694 system_svc.go:56] duration metric: took 11.280663ms WaitForService to wait for kubelet.
	I0626 18:37:41.931093  375694 kubeadm.go:581] duration metric: took 34.329482101s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 18:37:41.931118  375694 node_conditions.go:102] verifying NodePressure condition ...
	I0626 18:37:42.113539  375694 request.go:628] Waited for 182.3457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0626 18:37:42.116338  375694 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0626 18:37:42.116368  375694 node_conditions.go:123] node cpu capacity is 8
	I0626 18:37:42.116381  375694 node_conditions.go:105] duration metric: took 185.259289ms to run NodePressure ...
	I0626 18:37:42.116393  375694 start.go:228] waiting for startup goroutines ...
	I0626 18:37:42.116399  375694 start.go:233] waiting for cluster config update ...
	I0626 18:37:42.116408  375694 start.go:242] writing updated cluster config ...
	I0626 18:37:42.116677  375694 ssh_runner.go:195] Run: rm -f paused
	I0626 18:37:42.165846  375694 start.go:652] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0626 18:37:42.168282  375694 out.go:177] 
	W0626 18:37:42.170001  375694 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0626 18:37:42.171665  375694 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0626 18:37:42.173403  375694 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-022189" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Jun 26 18:40:49 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:49.784466631Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-mbkcf from CNI network \"kindnet\" (type=ptp)"
	Jun 26 18:40:49 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:49.814312631Z" level=info msg="Stopped pod sandbox: c4b09addc1a9ef5d63488dce0e4fa8701fcfd8dba817a0b3e6c2a0985cff7e90" id=c53d937b-2df2-4ded-9ab3-3186c3d7eb23 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:49 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:49.814449831Z" level=info msg="Stopped pod sandbox (already stopped): c4b09addc1a9ef5d63488dce0e4fa8701fcfd8dba817a0b3e6c2a0985cff7e90" id=8198dea4-9f60-48f3-8bf0-ca69c98ed80c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.004771776Z" level=info msg="Removing container: e135da829dbf1ac4e46476393e8916064f90198815707dee15428364eb51521a" id=66011ae0-f633-4af8-836f-0c64eefae879 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.020109926Z" level=info msg="Removed container e135da829dbf1ac4e46476393e8916064f90198815707dee15428364eb51521a: ingress-nginx/ingress-nginx-admission-patch-9psrm/patch" id=66011ae0-f633-4af8-836f-0c64eefae879 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.021180292Z" level=info msg="Removing container: 7b435e450c2a67a9a1c09c356cb3d7c417bfd6d2eee046d59463df1096f5f2b4" id=119f03d8-01cf-4fc5-859f-a67204befaa0 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.035980828Z" level=info msg="Removed container 7b435e450c2a67a9a1c09c356cb3d7c417bfd6d2eee046d59463df1096f5f2b4: ingress-nginx/ingress-nginx-controller-7fcf777cb7-mbkcf/controller" id=119f03d8-01cf-4fc5-859f-a67204befaa0 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.037163754Z" level=info msg="Removing container: 0d34c209ee1d6cacad4edc76f8d9cf876e1757ac1de8bec4310a4f6bb51d10e4" id=9516c47b-26d1-44e0-993e-edaed5afc151 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.078510264Z" level=info msg="Removed container 0d34c209ee1d6cacad4edc76f8d9cf876e1757ac1de8bec4310a4f6bb51d10e4: ingress-nginx/ingress-nginx-admission-create-d4wdm/create" id=9516c47b-26d1-44e0-993e-edaed5afc151 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.079763257Z" level=info msg="Stopping pod sandbox: 35bf8dc9764ad999036101069f9785b691946349b71cfce6f0bf1a7dc2acc810" id=e3b66f3f-3246-4875-a5a3-0e9beb14e289 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.079805564Z" level=info msg="Stopped pod sandbox (already stopped): 35bf8dc9764ad999036101069f9785b691946349b71cfce6f0bf1a7dc2acc810" id=e3b66f3f-3246-4875-a5a3-0e9beb14e289 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.080102212Z" level=info msg="Removing pod sandbox: 35bf8dc9764ad999036101069f9785b691946349b71cfce6f0bf1a7dc2acc810" id=c58314e6-7e37-4da8-90d4-eaf4e1f50570 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.084987157Z" level=info msg="Removed pod sandbox: 35bf8dc9764ad999036101069f9785b691946349b71cfce6f0bf1a7dc2acc810" id=c58314e6-7e37-4da8-90d4-eaf4e1f50570 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.085434117Z" level=info msg="Stopping pod sandbox: 1b26d24700e9ea3120fbe5e84b6385a5415b48ec16db1826cad4e1b891648e15" id=6c953a7e-86bd-4846-9f34-327394d263d7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.085471669Z" level=info msg="Stopped pod sandbox (already stopped): 1b26d24700e9ea3120fbe5e84b6385a5415b48ec16db1826cad4e1b891648e15" id=6c953a7e-86bd-4846-9f34-327394d263d7 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.085733252Z" level=info msg="Removing pod sandbox: 1b26d24700e9ea3120fbe5e84b6385a5415b48ec16db1826cad4e1b891648e15" id=c5fe50cc-d604-4951-bfe5-802ec316bd84 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.090273824Z" level=info msg="Removed pod sandbox: 1b26d24700e9ea3120fbe5e84b6385a5415b48ec16db1826cad4e1b891648e15" id=c5fe50cc-d604-4951-bfe5-802ec316bd84 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.090621008Z" level=info msg="Stopping pod sandbox: c4b09addc1a9ef5d63488dce0e4fa8701fcfd8dba817a0b3e6c2a0985cff7e90" id=46d13506-a827-49b7-bfd9-435d428e1d26 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.090662570Z" level=info msg="Stopped pod sandbox (already stopped): c4b09addc1a9ef5d63488dce0e4fa8701fcfd8dba817a0b3e6c2a0985cff7e90" id=46d13506-a827-49b7-bfd9-435d428e1d26 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.090904024Z" level=info msg="Removing pod sandbox: c4b09addc1a9ef5d63488dce0e4fa8701fcfd8dba817a0b3e6c2a0985cff7e90" id=f05158ad-bb09-4100-a233-19090cf917cd name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.096982899Z" level=info msg="Removed pod sandbox: c4b09addc1a9ef5d63488dce0e4fa8701fcfd8dba817a0b3e6c2a0985cff7e90" id=f05158ad-bb09-4100-a233-19090cf917cd name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.097347380Z" level=info msg="Stopping pod sandbox: bbcf559ee92b52caa1c81e956de691e9d08ae06a44435ae936634e59671f4e31" id=4724c56f-02d6-4211-8072-990241f9b29b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.097384616Z" level=info msg="Stopped pod sandbox (already stopped): bbcf559ee92b52caa1c81e956de691e9d08ae06a44435ae936634e59671f4e31" id=4724c56f-02d6-4211-8072-990241f9b29b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.097733925Z" level=info msg="Removing pod sandbox: bbcf559ee92b52caa1c81e956de691e9d08ae06a44435ae936634e59671f4e31" id=755d368b-9121-40f0-a63c-def93f8771d7 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	Jun 26 18:40:52 ingress-addon-legacy-022189 crio[954]: time="2023-06-26 18:40:52.102649997Z" level=info msg="Removed pod sandbox: bbcf559ee92b52caa1c81e956de691e9d08ae06a44435ae936634e59671f4e31" id=755d368b-9121-40f0-a63c-def93f8771d7 name=/runtime.v1alpha2.RuntimeService/RemovePodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                     CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	81c7540e5186f       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea   21 seconds ago      Running             hello-world-app           0                   45a4612eee57b       hello-world-app-5f5d8b66bb-rjv87
	ae1e007fd6773       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6           2 minutes ago       Running             nginx                     0                   39641a40ca426       nginx
	9d827eb330a66       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                          3 minutes ago       Running             coredns                   0                   0b4ed7ca62834       coredns-66bff467f8-2b7cv
	15d5f4afe9c04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                          3 minutes ago       Running             storage-provisioner       0                   de0d6cb7996c1       storage-provisioner
	a8898e522e044       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974        3 minutes ago       Running             kindnet-cni               0                   8ce668ed8a14d       kindnet-rj575
	9866a87fd345b       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                          3 minutes ago       Running             kube-proxy                0                   5f3da3f5fa494       kube-proxy-xb7jk
	135ce93c3f134       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                          4 minutes ago       Running             kube-scheduler            0                   3b0d3e26fac79       kube-scheduler-ingress-addon-legacy-022189
	0a080e84c5e23       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                          4 minutes ago       Running             etcd                      0                   a7a0daf5f6532       etcd-ingress-addon-legacy-022189
	100c2123e426d       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                          4 minutes ago       Running             kube-controller-manager   0                   ab9c5ee16b156       kube-controller-manager-ingress-addon-legacy-022189
	4ccddb0c75a59       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                          4 minutes ago       Running             kube-apiserver            0                   16ada7755308b       kube-apiserver-ingress-addon-legacy-022189
	
	* 
	* ==> coredns [9d827eb330a66488610e28ccdce76e699add844fc36a900c3d4cc0944a05e35c] <==
	* [INFO] 10.244.0.5:42147 - 27941 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00280845s
	[INFO] 10.244.0.5:54840 - 16634 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003234278s
	[INFO] 10.244.0.5:42147 - 24194 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003039093s
	[INFO] 10.244.0.5:53266 - 7238 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003383346s
	[INFO] 10.244.0.5:40851 - 61455 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003285072s
	[INFO] 10.244.0.5:60378 - 39182 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003251604s
	[INFO] 10.244.0.5:53263 - 60890 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00332224s
	[INFO] 10.244.0.5:51215 - 18266 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003337616s
	[INFO] 10.244.0.5:52182 - 31105 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003149068s
	[INFO] 10.244.0.5:42147 - 3607 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003992004s
	[INFO] 10.244.0.5:53266 - 25562 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004101373s
	[INFO] 10.244.0.5:42147 - 51726 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050033s
	[INFO] 10.244.0.5:53263 - 41272 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.003888196s
	[INFO] 10.244.0.5:40851 - 41263 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004055839s
	[INFO] 10.244.0.5:53266 - 24926 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000082738s
	[INFO] 10.244.0.5:60378 - 37997 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004253147s
	[INFO] 10.244.0.5:54840 - 957 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00453796s
	[INFO] 10.244.0.5:40851 - 7754 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064176s
	[INFO] 10.244.0.5:60378 - 60002 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064887s
	[INFO] 10.244.0.5:53263 - 43755 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000166458s
	[INFO] 10.244.0.5:52182 - 28505 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004455055s
	[INFO] 10.244.0.5:54840 - 15297 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000107642s
	[INFO] 10.244.0.5:51215 - 15731 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004317889s
	[INFO] 10.244.0.5:52182 - 5699 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069971s
	[INFO] 10.244.0.5:51215 - 44963 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059746s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-022189
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-022189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=ingress-addon-legacy-022189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T18_36_52_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 18:36:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-022189
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 18:40:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 18:40:52 +0000   Mon, 26 Jun 2023 18:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 18:40:52 +0000   Mon, 26 Jun 2023 18:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 18:40:52 +0000   Mon, 26 Jun 2023 18:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 18:40:52 +0000   Mon, 26 Jun 2023 18:37:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-022189
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 61e3d4d3079d48ed8730769a0bb03078
	  System UUID:                58e0c0b5-3137-45a3-86b6-4ec0945db9da
	  Boot ID:                    4f86402f-f9e2-4c4c-a5d0-b2ea258e243c
	  Kernel Version:             5.15.0-1036-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-rjv87                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-2b7cv                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m48s
	  kube-system                 etcd-ingress-addon-legacy-022189                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kindnet-rj575                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m48s
	  kube-system                 kube-apiserver-ingress-addon-legacy-022189             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-022189    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-xb7jk                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-scheduler-ingress-addon-legacy-022189             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s (x5 over 4m11s)  kubelet     Node ingress-addon-legacy-022189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x5 over 4m11s)  kubelet     Node ingress-addon-legacy-022189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x4 over 4m11s)  kubelet     Node ingress-addon-legacy-022189 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m3s                   kubelet     Node ingress-addon-legacy-022189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s                   kubelet     Node ingress-addon-legacy-022189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s                   kubelet     Node ingress-addon-legacy-022189 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m47s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m33s                  kubelet     Node ingress-addon-legacy-022189 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007343] FS-Cache: O-key=[8] '99a20f0200000000'
	[  +0.004944] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=00000000fb8647a3{9p.inode} n=00000000bc38a259
	[  +0.008720] FS-Cache: N-key=[8] '99a20f0200000000'
	[  +2.886438] FS-Cache: Duplicate cookie detected
	[  +0.004754] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006742] FS-Cache: O-cookie d=00000000d65110e0{9P.session} n=000000005f999d16
	[  +0.007526] FS-Cache: O-key=[10] '34323936303633383537'
	[  +0.005375] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007946] FS-Cache: N-cookie d=00000000d65110e0{9P.session} n=0000000046707e50
	[  +0.008925] FS-Cache: N-key=[10] '34323936303633383537'
	[Jun26 18:38] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[  +0.999976] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[  +2.015766] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[  +4.063643] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[  +8.191166] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[ +16.126397] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[Jun26 18:39] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	
	* 
	* ==> etcd [0a080e84c5e23ac970d12a1471c6cc8bb5aac7b814dfcbe3d8e01962e8e31edd] <==
	* raft2023/06/26 18:36:45 INFO: aec36adc501070cc became follower at term 1
	raft2023/06/26 18:36:45 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-06-26 18:36:45.119290 W | auth: simple token is not cryptographically signed
	2023-06-26 18:36:45.122315 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-06-26 18:36:45.122615 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/06/26 18:36:45 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-06-26 18:36:45.123225 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-06-26 18:36:45.124568 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-06-26 18:36:45.124615 I | embed: listening for peers on 192.168.49.2:2380
	2023-06-26 18:36:45.124744 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/06/26 18:36:45 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/06/26 18:36:45 INFO: aec36adc501070cc became candidate at term 2
	raft2023/06/26 18:36:45 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/06/26 18:36:45 INFO: aec36adc501070cc became leader at term 2
	raft2023/06/26 18:36:45 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-06-26 18:36:45.216230 I | etcdserver: setting up the initial cluster version to 3.4
	2023-06-26 18:36:45.217086 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-06-26 18:36:45.217148 I | etcdserver/api: enabled capabilities for version 3.4
	2023-06-26 18:36:45.217211 I | etcdserver: published {Name:ingress-addon-legacy-022189 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-06-26 18:36:45.217327 I | embed: ready to serve client requests
	2023-06-26 18:36:45.218912 I | embed: serving client requests on 192.168.49.2:2379
	2023-06-26 18:36:45.218970 I | embed: ready to serve client requests
	2023-06-26 18:36:45.222030 I | embed: serving client requests on 127.0.0.1:2379
	2023-06-26 18:37:12.232467 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-ingress-addon-legacy-022189\" " with result "range_response_count:1 size:4788" took too long (150.123456ms) to execute
	2023-06-26 18:37:12.400319 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-ingress-addon-legacy-022189\" " with result "range_response_count:1 size:6682" took too long (159.073806ms) to execute
	
	* 
	* ==> kernel <==
	*  18:40:55 up  1:23,  0 users,  load average: 0.10, 0.52, 1.16
	Linux ingress-addon-legacy-022189 5.15.0-1036-gcp #44~20.04.1-Ubuntu SMP Fri Jun 9 10:48:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [a8898e522e044a60d23d3f34ca790984870a2fe3a342a6db220adc82c805e5b4] <==
	* I0626 18:38:53.375398       1 main.go:227] handling current node
	I0626 18:39:03.378827       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:39:03.378852       1 main.go:227] handling current node
	I0626 18:39:13.391320       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:39:13.391353       1 main.go:227] handling current node
	I0626 18:39:23.395036       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:39:23.395061       1 main.go:227] handling current node
	I0626 18:39:33.404080       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:39:33.404106       1 main.go:227] handling current node
	I0626 18:39:43.416344       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:39:43.416533       1 main.go:227] handling current node
	I0626 18:39:53.419670       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:39:53.419694       1 main.go:227] handling current node
	I0626 18:40:03.423434       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:40:03.423457       1 main.go:227] handling current node
	I0626 18:40:13.427030       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:40:13.427054       1 main.go:227] handling current node
	I0626 18:40:23.431380       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:40:23.431405       1 main.go:227] handling current node
	I0626 18:40:33.440172       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:40:33.440202       1 main.go:227] handling current node
	I0626 18:40:43.444194       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:40:43.444221       1 main.go:227] handling current node
	I0626 18:40:53.455841       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0626 18:40:53.455873       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [4ccddb0c75a59ba54edd152d5401665cf43ebd759b56d04684ad2d6b597981eb] <==
	* I0626 18:36:48.928104       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0626 18:36:49.012177       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0626 18:36:49.012183       1 cache.go:39] Caches are synced for autoregister controller
	I0626 18:36:49.014364       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0626 18:36:49.014419       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0626 18:36:49.092569       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0626 18:36:49.909459       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0626 18:36:49.909489       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0626 18:36:49.914246       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0626 18:36:49.917068       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0626 18:36:49.917088       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0626 18:36:50.194838       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0626 18:36:50.224560       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0626 18:36:50.334820       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0626 18:36:50.335731       1 controller.go:609] quota admission added evaluator for: endpoints
	I0626 18:36:50.338886       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0626 18:36:51.269695       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0626 18:36:51.669231       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0626 18:36:51.861311       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0626 18:36:52.009372       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0626 18:37:07.398952       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0626 18:37:07.503277       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0626 18:37:42.692568       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0626 18:38:10.072177       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0626 18:40:47.616007       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [100c2123e426de0117f4c18d68ecc50cb85f7c75a16ec95aed15abbc5bb6fad1] <==
	* I0626 18:37:07.407153       1 taint_manager.go:187] Starting NoExecuteTaintManager
	W0626 18:37:07.407165       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-022189. Assuming now as a timestamp.
	I0626 18:37:07.407207       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0626 18:37:07.407509       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-022189", UID:"fd48a2eb-7cf0-47d2-9673-9b5e7dfe276b", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-022189 event: Registered Node ingress-addon-legacy-022189 in Controller
	I0626 18:37:07.408236       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0626 18:37:07.408262       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0626 18:37:07.411174       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0626 18:37:07.412029       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0626 18:37:07.493184       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0626 18:37:07.497663       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0626 18:37:07.497754       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0626 18:37:07.510043       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"cb4d0eac-11b0-4f29-be64-91bec6b7fc17", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-2b7cv
	E0626 18:37:07.517392       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0626 18:37:07.598254       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"9d0877ff-8411-4d63-b215-7edc41b4d488", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-xb7jk
	I0626 18:37:07.599002       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"1dce6fef-10f0-4064-98b3-4c93fc704eb1", APIVersion:"apps/v1", ResourceVersion:"238", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rj575
	E0626 18:37:07.708845       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0626 18:37:22.407961       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0626 18:37:42.633927       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b523d9ba-7b66-4334-aae9-cae4d98d101d", APIVersion:"apps/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0626 18:37:42.640144       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"a66232ce-8a55-4234-8a4b-5e9a55e4bc42", APIVersion:"apps/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-mbkcf
	I0626 18:37:42.700291       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5163c9db-5138-4390-816e-46694e6df5f1", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-d4wdm
	I0626 18:37:42.710819       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e91fd8db-fa50-480f-8c35-de7027096581", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-9psrm
	I0626 18:37:48.216997       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5163c9db-5138-4390-816e-46694e6df5f1", APIVersion:"batch/v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0626 18:38:04.244265       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e91fd8db-fa50-480f-8c35-de7027096581", APIVersion:"batch/v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0626 18:40:30.641195       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"0f6d4d79-83a2-4266-8e5c-4a9dc2c3dc50", APIVersion:"apps/v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0626 18:40:30.648433       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"95bfb0bc-a953-4700-a8b6-de4d338be6c8", APIVersion:"apps/v1", ResourceVersion:"734", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-rjv87
	
	* 
	* ==> kube-proxy [9866a87fd345b1545701f4d3fa47e64d7c612facc4714f0c0ac384e4d0c190ce] <==
	* W0626 18:37:08.166069       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0626 18:37:08.172160       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0626 18:37:08.172188       1 server_others.go:186] Using iptables Proxier.
	I0626 18:37:08.172421       1 server.go:583] Version: v1.18.20
	I0626 18:37:08.172849       1 config.go:315] Starting service config controller
	I0626 18:37:08.172887       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0626 18:37:08.172912       1 config.go:133] Starting endpoints config controller
	I0626 18:37:08.172921       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0626 18:37:08.273065       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0626 18:37:08.273071       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [135ce93c3f1349b17c7148a668e2f65a74ceb09a1f32ca802783cb6863866c56] <==
	* W0626 18:36:49.013160       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0626 18:36:49.013265       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 18:36:49.013281       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0626 18:36:49.013287       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0626 18:36:49.101766       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0626 18:36:49.101803       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0626 18:36:49.106229       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0626 18:36:49.106256       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0626 18:36:49.106261       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0626 18:36:49.106346       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0626 18:36:49.113647       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 18:36:49.113890       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 18:36:49.114234       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0626 18:36:49.114426       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0626 18:36:49.114594       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0626 18:36:49.114939       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 18:36:49.114969       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 18:36:49.115049       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 18:36:49.115139       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 18:36:49.116323       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 18:36:49.116459       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 18:36:49.113646       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 18:36:49.978240       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 18:36:49.998409       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0626 18:36:50.706428       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Jun 26 18:40:34 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:34.022833    1892 pod_workers.go:191] Error syncing pod a0408a51-52c4-4fe0-8cea-ddf9cef2922c ("kube-ingress-dns-minikube_kube-system(a0408a51-52c4-4fe0-8cea-ddf9cef2922c)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jun 26 18:40:46 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:46.228757    1892 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-cg46h" (UniqueName: "kubernetes.io/secret/a0408a51-52c4-4fe0-8cea-ddf9cef2922c-minikube-ingress-dns-token-cg46h") pod "a0408a51-52c4-4fe0-8cea-ddf9cef2922c" (UID: "a0408a51-52c4-4fe0-8cea-ddf9cef2922c")
	Jun 26 18:40:46 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:46.230796    1892 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0408a51-52c4-4fe0-8cea-ddf9cef2922c-minikube-ingress-dns-token-cg46h" (OuterVolumeSpecName: "minikube-ingress-dns-token-cg46h") pod "a0408a51-52c4-4fe0-8cea-ddf9cef2922c" (UID: "a0408a51-52c4-4fe0-8cea-ddf9cef2922c"). InnerVolumeSpecName "minikube-ingress-dns-token-cg46h". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 26 18:40:46 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:46.329065    1892 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-cg46h" (UniqueName: "kubernetes.io/secret/a0408a51-52c4-4fe0-8cea-ddf9cef2922c-minikube-ingress-dns-token-cg46h") on node "ingress-addon-legacy-022189" DevicePath ""
	Jun 26 18:40:47 ingress-addon-legacy-022189 kubelet[1892]: W0626 18:40:47.487706    1892 pod_container_deletor.go:77] Container "35bf8dc9764ad999036101069f9785b691946349b71cfce6f0bf1a7dc2acc810" not found in pod's containers
	Jun 26 18:40:47 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:47.607295    1892 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-mbkcf.176c499107a6d88a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-mbkcf", UID:"f0836772-727f-4773-b7d1-785803e4e065", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-022189"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11e94cbe408c28a, ext:235965402618, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11e94cbe408c28a, ext:235965402618, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-mbkcf.176c499107a6d88a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 26 18:40:47 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:47.610211    1892 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-mbkcf.176c499107a6d88a", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-mbkcf", UID:"f0836772-727f-4773-b7d1-785803e4e065", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-022189"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11e94cbe408c28a, ext:235965402618, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11e94cbe42e3a8c, ext:235967858172, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-mbkcf.176c499107a6d88a" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 26 18:40:50 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:50.238265    1892 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f0836772-727f-4773-b7d1-785803e4e065-webhook-cert") pod "f0836772-727f-4773-b7d1-785803e4e065" (UID: "f0836772-727f-4773-b7d1-785803e4e065")
	Jun 26 18:40:50 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:50.238333    1892 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-j9r7x" (UniqueName: "kubernetes.io/secret/f0836772-727f-4773-b7d1-785803e4e065-ingress-nginx-token-j9r7x") pod "f0836772-727f-4773-b7d1-785803e4e065" (UID: "f0836772-727f-4773-b7d1-785803e4e065")
	Jun 26 18:40:50 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:50.240452    1892 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0836772-727f-4773-b7d1-785803e4e065-ingress-nginx-token-j9r7x" (OuterVolumeSpecName: "ingress-nginx-token-j9r7x") pod "f0836772-727f-4773-b7d1-785803e4e065" (UID: "f0836772-727f-4773-b7d1-785803e4e065"). InnerVolumeSpecName "ingress-nginx-token-j9r7x". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 26 18:40:50 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:50.240543    1892 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0836772-727f-4773-b7d1-785803e4e065-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f0836772-727f-4773-b7d1-785803e4e065" (UID: "f0836772-727f-4773-b7d1-785803e4e065"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 26 18:40:50 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:50.338647    1892 reconciler.go:319] Volume detached for volume "ingress-nginx-token-j9r7x" (UniqueName: "kubernetes.io/secret/f0836772-727f-4773-b7d1-785803e4e065-ingress-nginx-token-j9r7x") on node "ingress-addon-legacy-022189" DevicePath ""
	Jun 26 18:40:50 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:50.338682    1892 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f0836772-727f-4773-b7d1-785803e4e065-webhook-cert") on node "ingress-addon-legacy-022189" DevicePath ""
	Jun 26 18:40:50 ingress-addon-legacy-022189 kubelet[1892]: W0626 18:40:50.493770    1892 pod_container_deletor.go:77] Container "c4b09addc1a9ef5d63488dce0e4fa8701fcfd8dba817a0b3e6c2a0985cff7e90" not found in pod's containers
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:52.003588    1892 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: e135da829dbf1ac4e46476393e8916064f90198815707dee15428364eb51521a
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:52.020325    1892 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7b435e450c2a67a9a1c09c356cb3d7c417bfd6d2eee046d59463df1096f5f2b4
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: I0626 18:40:52.036208    1892 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0d34c209ee1d6cacad4edc76f8d9cf876e1757ac1de8bec4310a4f6bb51d10e4
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:52.134512    1892 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2daa71fd541c63efd6e842443bc217f8902d502aa7687c77f1e267d809f10129/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2daa71fd541c63efd6e842443bc217f8902d502aa7687c77f1e267d809f10129/diff: no such file or directory, extraDiskErr: <nil>
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:52.139092    1892 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2daa71fd541c63efd6e842443bc217f8902d502aa7687c77f1e267d809f10129/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2daa71fd541c63efd6e842443bc217f8902d502aa7687c77f1e267d809f10129/diff: no such file or directory, extraDiskErr: <nil>
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:52.206660    1892 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6cc911476acbc3548424ae00f32de94d9ca3f73eecb50d30bc6ddd0068d386e3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6cc911476acbc3548424ae00f32de94d9ca3f73eecb50d30bc6ddd0068d386e3/diff: no such file or directory, extraDiskErr: <nil>
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:52.213104    1892 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/6cc911476acbc3548424ae00f32de94d9ca3f73eecb50d30bc6ddd0068d386e3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/6cc911476acbc3548424ae00f32de94d9ca3f73eecb50d30bc6ddd0068d386e3/diff: no such file or directory, extraDiskErr: <nil>
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:52.214013    1892 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0c8be44364ba82b23af7b9d7f7f077642fcd1cc5367e68de20cdc13a688fdfbf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0c8be44364ba82b23af7b9d7f7f077642fcd1cc5367e68de20cdc13a688fdfbf/diff: no such file or directory, extraDiskErr: <nil>
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:52.305098    1892 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/20911f7a5ff15fb52079ab2606754c5997ebf6f4ba1da294d8743fde8e4911ac/diff" to get inode usage: stat /var/lib/containers/storage/overlay/20911f7a5ff15fb52079ab2606754c5997ebf6f4ba1da294d8743fde8e4911ac/diff: no such file or directory, extraDiskErr: <nil>
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:52.309290    1892 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/20911f7a5ff15fb52079ab2606754c5997ebf6f4ba1da294d8743fde8e4911ac/diff" to get inode usage: stat /var/lib/containers/storage/overlay/20911f7a5ff15fb52079ab2606754c5997ebf6f4ba1da294d8743fde8e4911ac/diff: no such file or directory, extraDiskErr: <nil>
	Jun 26 18:40:52 ingress-addon-legacy-022189 kubelet[1892]: E0626 18:40:52.400181    1892 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0c8be44364ba82b23af7b9d7f7f077642fcd1cc5367e68de20cdc13a688fdfbf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0c8be44364ba82b23af7b9d7f7f077642fcd1cc5367e68de20cdc13a688fdfbf/diff: no such file or directory, extraDiskErr: <nil>
	
	* 
	* ==> storage-provisioner [15d5f4afe9c040fe838253eb8a6e92130e8b143f506b8371006e356dbbb92f8d] <==
	* I0626 18:37:27.456578       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0626 18:37:27.464382       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0626 18:37:27.464439       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0626 18:37:27.469500       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0626 18:37:27.469564       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0557370f-d52c-4b23-9720-07abe51ea8cc", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-022189_c0ad037a-3051-4ca8-86f6-94a993fa1f8b became leader
	I0626 18:37:27.469640       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-022189_c0ad037a-3051-4ca8-86f6-94a993fa1f8b!
	I0626 18:37:27.570772       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-022189_c0ad037a-3051-4ca8-86f6-94a993fa1f8b!
	

                                                
                                                
-- /stdout --
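The storage-provisioner lines near the end of the log above show ordinary client-go leader election rather than anything related to the ingress failure: the pod acquires the kube-system/k8s.io-minikube-hostpath lease (held on an Endpoints object) and only then starts the provisioner controller. As a sketch only (the command is not part of this test run, and the leader annotation named below is the standard client-go EndpointsLock convention rather than something taken from these logs), the current holder can be inspected with:

	kubectl --context ingress-addon-legacy-022189 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the holder identity is typically recorded in the control-plane.alpha.kubernetes.io/leader annotation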
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-022189 -n ingress-addon-legacy-022189
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-022189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (179.36s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- get pods -o jsonpath='{.items[*].metadata.name}'
E0626 18:47:56.659548  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-c5c5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-c5c5w -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-c5c5w -- sh -c "ping -c 1 192.168.58.1": exit status 1 (173.397724ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-c5c5w): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-cxsjd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-cxsjd -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-cxsjd -- sh -c "ping -c 1 192.168.58.1": exit status 1 (171.554544ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-cxsjd): exit status 1
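Both exec'd pings fail identically: busybox ping opens a raw ICMP socket, which requires either root or the NET_RAW capability, and the "ping: permission denied (are you root?)" message means the busybox container has neither. That is consistent with this job's CRI-O runtime not granting NET_RAW to containers by default. A hedged way to confirm, using commands that are not part of the test suite (capsh is assumed to be available on the host):

	# dump the effective capability mask of the busybox container's PID 1
	out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-c5c5w -- grep CapEff /proc/1/status
	# decode the mask on the host, e.g. capsh --decode=<CapEff value>; if cap_net_raw is
	# missing, adding NET_RAW via the pod securityContext (capabilities.add) or running
	# the container as root would be the usual way to make ping work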
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-306845
helpers_test.go:235: (dbg) docker inspect multinode-306845:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2",
	        "Created": "2023-06-26T18:46:10.332902099Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 422133,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-26T18:46:10.605787887Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:42a2b4e0d52aa58abe36e9abb680d93c11444dcb07814b595a45d2fa0f8a777c",
	        "ResolvConfPath": "/var/lib/docker/containers/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/hostname",
	        "HostsPath": "/var/lib/docker/containers/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/hosts",
	        "LogPath": "/var/lib/docker/containers/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2-json.log",
	        "Name": "/multinode-306845",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-306845:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-306845",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7b432931954aded8239195b7efaf81c698fba27a9420d67b7f9ced7b73b61184-init/diff:/var/lib/docker/overlay2/8f9a4266fd693ed66b9874436fe49dcae15615f8bcd132a5a8e8ba2403f6ef40/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b432931954aded8239195b7efaf81c698fba27a9420d67b7f9ced7b73b61184/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b432931954aded8239195b7efaf81c698fba27a9420d67b7f9ced7b73b61184/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b432931954aded8239195b7efaf81c698fba27a9420d67b7f9ced7b73b61184/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-306845",
	                "Source": "/var/lib/docker/volumes/multinode-306845/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-306845",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-306845",
	                "name.minikube.sigs.k8s.io": "multinode-306845",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1fc6e150963ec273e438537a00571600b020a32e094caaa9291810203ff315a9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1fc6e150963e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-306845": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "15943a8b3a25",
	                        "multinode-306845"
	                    ],
	                    "NetworkID": "5b706925dd02b32edd8c8f87d452eb6acb8a4416a9357e734c6f77d7667619d8",
	                    "EndpointID": "afed65dce90c982791eea24d6eaafbe7fdf5955ae20da66b05952b28732d6997",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-306845 -n multinode-306845
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-306845 logs -n 25: (1.408154858s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-638061                           | mount-start-2-638061 | jenkins | v1.30.1 | 26 Jun 23 18:45 UTC | 26 Jun 23 18:45 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-638061 ssh -- ls                    | mount-start-2-638061 | jenkins | v1.30.1 | 26 Jun 23 18:45 UTC | 26 Jun 23 18:45 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-622029                           | mount-start-1-622029 | jenkins | v1.30.1 | 26 Jun 23 18:45 UTC | 26 Jun 23 18:45 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-638061 ssh -- ls                    | mount-start-2-638061 | jenkins | v1.30.1 | 26 Jun 23 18:45 UTC | 26 Jun 23 18:45 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-638061                           | mount-start-2-638061 | jenkins | v1.30.1 | 26 Jun 23 18:45 UTC | 26 Jun 23 18:45 UTC |
	| start   | -p mount-start-2-638061                           | mount-start-2-638061 | jenkins | v1.30.1 | 26 Jun 23 18:45 UTC | 26 Jun 23 18:46 UTC |
	| ssh     | mount-start-2-638061 ssh -- ls                    | mount-start-2-638061 | jenkins | v1.30.1 | 26 Jun 23 18:46 UTC | 26 Jun 23 18:46 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-638061                           | mount-start-2-638061 | jenkins | v1.30.1 | 26 Jun 23 18:46 UTC | 26 Jun 23 18:46 UTC |
	| delete  | -p mount-start-1-622029                           | mount-start-1-622029 | jenkins | v1.30.1 | 26 Jun 23 18:46 UTC | 26 Jun 23 18:46 UTC |
	| start   | -p multinode-306845                               | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:46 UTC | 26 Jun 23 18:47 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- apply -f                   | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- rollout                    | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- get pods -o                | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- get pods -o                | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | busybox-67b7f59bb-c5c5w --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | busybox-67b7f59bb-cxsjd --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | busybox-67b7f59bb-c5c5w --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | busybox-67b7f59bb-cxsjd --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | busybox-67b7f59bb-c5c5w -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | busybox-67b7f59bb-cxsjd -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- get pods -o                | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | busybox-67b7f59bb-c5c5w                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC |                     |
	|         | busybox-67b7f59bb-c5c5w -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC | 26 Jun 23 18:47 UTC |
	|         | busybox-67b7f59bb-cxsjd                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-306845 -- exec                       | multinode-306845     | jenkins | v1.30.1 | 26 Jun 23 18:47 UTC |                     |
	|         | busybox-67b7f59bb-cxsjd -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 18:46:04
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 18:46:04.634524  421526 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:46:04.634693  421526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:46:04.634704  421526 out.go:309] Setting ErrFile to fd 2...
	I0626 18:46:04.634710  421526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:46:04.634846  421526 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 18:46:04.635479  421526 out.go:303] Setting JSON to false
	I0626 18:46:04.636489  421526 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5315,"bootTime":1687799850,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 18:46:04.636557  421526 start.go:137] virtualization: kvm guest
	I0626 18:46:04.639030  421526 out.go:177] * [multinode-306845] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 18:46:04.640891  421526 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 18:46:04.642271  421526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 18:46:04.640929  421526 notify.go:220] Checking for updates...
	I0626 18:46:04.644874  421526 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:46:04.646236  421526 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 18:46:04.647688  421526 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 18:46:04.649011  421526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 18:46:04.650455  421526 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 18:46:04.671337  421526 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 18:46:04.671447  421526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:46:04.717092  421526 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:36 SystemTime:2023-06-26 18:46:04.708053551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:46:04.717205  421526 docker.go:294] overlay module found
	I0626 18:46:04.719245  421526 out.go:177] * Using the docker driver based on user configuration
	I0626 18:46:04.720710  421526 start.go:297] selected driver: docker
	I0626 18:46:04.720725  421526 start.go:954] validating driver "docker" against <nil>
	I0626 18:46:04.720739  421526 start.go:965] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 18:46:04.721541  421526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:46:04.767439  421526 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:36 SystemTime:2023-06-26 18:46:04.758870599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:46:04.767616  421526 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 18:46:04.767839  421526 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0626 18:46:04.769871  421526 out.go:177] * Using Docker driver with root privileges
	I0626 18:46:04.771274  421526 cni.go:84] Creating CNI manager for ""
	I0626 18:46:04.771286  421526 cni.go:137] 0 nodes found, recommending kindnet
	I0626 18:46:04.771294  421526 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0626 18:46:04.771305  421526 start_flags.go:319] config:
	{Name:multinode-306845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-306845 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:46:04.772834  421526 out.go:177] * Starting control plane node multinode-306845 in cluster multinode-306845
	I0626 18:46:04.774005  421526 cache.go:122] Beginning downloading kic base image for docker with crio
	I0626 18:46:04.775225  421526 out.go:177] * Pulling base image ...
	I0626 18:46:04.776365  421526 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:46:04.776395  421526 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon
	I0626 18:46:04.776450  421526 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 18:46:04.776467  421526 cache.go:57] Caching tarball of preloaded images
	I0626 18:46:04.776587  421526 preload.go:174] Found /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 18:46:04.776603  421526 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 18:46:04.776981  421526 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/config.json ...
	I0626 18:46:04.777010  421526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/config.json: {Name:mk0c90548274052fc1067d4caf1104c315e7d305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:04.792198  421526 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon, skipping pull
	I0626 18:46:04.792224  421526 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 exists in daemon, skipping load
	I0626 18:46:04.792244  421526 cache.go:195] Successfully downloaded all kic artifacts
	I0626 18:46:04.792277  421526 start.go:365] acquiring machines lock for multinode-306845: {Name:mk098c7d088ffaed430b3dea6d657410a7882e10 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:46:04.792388  421526 start.go:369] acquired machines lock for "multinode-306845" in 84.607µs
	I0626 18:46:04.792422  421526 start.go:93] Provisioning new machine with config: &{Name:multinode-306845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-306845 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 18:46:04.792507  421526 start.go:125] createHost starting for "" (driver="docker")
	I0626 18:46:04.794530  421526 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0626 18:46:04.794747  421526 start.go:159] libmachine.API.Create for "multinode-306845" (driver="docker")
	I0626 18:46:04.794781  421526 client.go:168] LocalClient.Create starting
	I0626 18:46:04.794867  421526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem
	I0626 18:46:04.794901  421526 main.go:141] libmachine: Decoding PEM data...
	I0626 18:46:04.794927  421526 main.go:141] libmachine: Parsing certificate...
	I0626 18:46:04.795004  421526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem
	I0626 18:46:04.795022  421526 main.go:141] libmachine: Decoding PEM data...
	I0626 18:46:04.795030  421526 main.go:141] libmachine: Parsing certificate...
	I0626 18:46:04.795355  421526 cli_runner.go:164] Run: docker network inspect multinode-306845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0626 18:46:04.811210  421526 cli_runner.go:211] docker network inspect multinode-306845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0626 18:46:04.811274  421526 network_create.go:281] running [docker network inspect multinode-306845] to gather additional debugging logs...
	I0626 18:46:04.811288  421526 cli_runner.go:164] Run: docker network inspect multinode-306845
	W0626 18:46:04.827375  421526 cli_runner.go:211] docker network inspect multinode-306845 returned with exit code 1
	I0626 18:46:04.827409  421526 network_create.go:284] error running [docker network inspect multinode-306845]: docker network inspect multinode-306845: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-306845 not found
	I0626 18:46:04.827424  421526 network_create.go:286] output of [docker network inspect multinode-306845]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-306845 not found
	
	** /stderr **
	I0626 18:46:04.827484  421526 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0626 18:46:04.843270  421526 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b66d9d19eb8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ec:f4:16:1b} reservation:<nil>}
	I0626 18:46:04.843756  421526 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0012cdf30}
	I0626 18:46:04.843785  421526 network_create.go:123] attempt to create docker network multinode-306845 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0626 18:46:04.843836  421526 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-306845 multinode-306845
	I0626 18:46:04.895627  421526 network_create.go:107] docker network multinode-306845 192.168.58.0/24 created
	I0626 18:46:04.895660  421526 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-306845" container
	I0626 18:46:04.895735  421526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0626 18:46:04.911344  421526 cli_runner.go:164] Run: docker volume create multinode-306845 --label name.minikube.sigs.k8s.io=multinode-306845 --label created_by.minikube.sigs.k8s.io=true
	I0626 18:46:04.927902  421526 oci.go:103] Successfully created a docker volume multinode-306845
	I0626 18:46:04.927980  421526 cli_runner.go:164] Run: docker run --rm --name multinode-306845-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-306845 --entrypoint /usr/bin/test -v multinode-306845:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -d /var/lib
	I0626 18:46:05.459873  421526 oci.go:107] Successfully prepared a docker volume multinode-306845
	I0626 18:46:05.459923  421526 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:46:05.459944  421526 kic.go:190] Starting extracting preloaded images to volume ...
	I0626 18:46:05.460030  421526 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-306845:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -I lz4 -xf /preloaded.tar -C /extractDir
	I0626 18:46:10.271839  421526 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-306845:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -I lz4 -xf /preloaded.tar -C /extractDir: (4.811762791s)
	I0626 18:46:10.271888  421526 kic.go:199] duration metric: took 4.811928 seconds to extract preloaded images to volume
	W0626 18:46:10.272065  421526 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0626 18:46:10.272213  421526 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0626 18:46:10.318527  421526 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-306845 --name multinode-306845 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-306845 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-306845 --network multinode-306845 --ip 192.168.58.2 --volume multinode-306845:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0626 18:46:10.613912  421526 cli_runner.go:164] Run: docker container inspect multinode-306845 --format={{.State.Running}}
	I0626 18:46:10.630965  421526 cli_runner.go:164] Run: docker container inspect multinode-306845 --format={{.State.Status}}
	I0626 18:46:10.648994  421526 cli_runner.go:164] Run: docker exec multinode-306845 stat /var/lib/dpkg/alternatives/iptables
	I0626 18:46:10.688700  421526 oci.go:144] the created container "multinode-306845" has a running status.
	I0626 18:46:10.688730  421526 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa...
	I0626 18:46:10.814059  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0626 18:46:10.814118  421526 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0626 18:46:10.833217  421526 cli_runner.go:164] Run: docker container inspect multinode-306845 --format={{.State.Status}}
	I0626 18:46:10.851305  421526 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0626 18:46:10.851329  421526 kic_runner.go:114] Args: [docker exec --privileged multinode-306845 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0626 18:46:10.922035  421526 cli_runner.go:164] Run: docker container inspect multinode-306845 --format={{.State.Status}}
	I0626 18:46:10.942737  421526 machine.go:88] provisioning docker machine ...
	I0626 18:46:10.942788  421526 ubuntu.go:169] provisioning hostname "multinode-306845"
	I0626 18:46:10.942854  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:10.960601  421526 main.go:141] libmachine: Using SSH client type: native
	I0626 18:46:10.961066  421526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I0626 18:46:10.961085  421526 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-306845 && echo "multinode-306845" | sudo tee /etc/hostname
	I0626 18:46:10.961809  421526 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47428->127.0.0.1:33159: read: connection reset by peer
	I0626 18:46:14.099615  421526 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-306845
	
	I0626 18:46:14.099689  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:14.117098  421526 main.go:141] libmachine: Using SSH client type: native
	I0626 18:46:14.117561  421526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I0626 18:46:14.117587  421526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-306845' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-306845/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-306845' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 18:46:14.245288  421526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 18:46:14.245323  421526 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16761-330054/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-330054/.minikube}
	I0626 18:46:14.245350  421526 ubuntu.go:177] setting up certificates
	I0626 18:46:14.245361  421526 provision.go:83] configureAuth start
	I0626 18:46:14.245412  421526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-306845
	I0626 18:46:14.261142  421526 provision.go:138] copyHostCerts
	I0626 18:46:14.261191  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem
	I0626 18:46:14.261222  421526 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem, removing ...
	I0626 18:46:14.261248  421526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem
	I0626 18:46:14.261321  421526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem (1679 bytes)
	I0626 18:46:14.261407  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem
	I0626 18:46:14.261430  421526 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem, removing ...
	I0626 18:46:14.261439  421526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem
	I0626 18:46:14.261477  421526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem (1082 bytes)
	I0626 18:46:14.261524  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem
	I0626 18:46:14.261546  421526 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem, removing ...
	I0626 18:46:14.261555  421526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem
	I0626 18:46:14.261583  421526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem (1123 bytes)
	I0626 18:46:14.261639  421526 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem org=jenkins.multinode-306845 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-306845]
	I0626 18:46:14.367794  421526 provision.go:172] copyRemoteCerts
	I0626 18:46:14.367860  421526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 18:46:14.367897  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:14.384830  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa Username:docker}
	I0626 18:46:14.477408  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0626 18:46:14.477467  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0626 18:46:14.499651  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0626 18:46:14.499725  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0626 18:46:14.520819  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0626 18:46:14.520909  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0626 18:46:14.542368  421526 provision.go:86] duration metric: configureAuth took 296.979562ms
	I0626 18:46:14.542396  421526 ubuntu.go:193] setting minikube options for container-runtime
	I0626 18:46:14.542631  421526 config.go:182] Loaded profile config "multinode-306845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:46:14.542769  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:14.559739  421526 main.go:141] libmachine: Using SSH client type: native
	I0626 18:46:14.560147  421526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33159 <nil> <nil>}
	I0626 18:46:14.560164  421526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 18:46:14.781455  421526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 18:46:14.781483  421526 machine.go:91] provisioned docker machine in 3.838714153s
	I0626 18:46:14.781494  421526 client.go:171] LocalClient.Create took 9.986706966s
	I0626 18:46:14.781519  421526 start.go:167] duration metric: libmachine.API.Create for "multinode-306845" took 9.986771371s
	I0626 18:46:14.781529  421526 start.go:300] post-start starting for "multinode-306845" (driver="docker")
	I0626 18:46:14.781545  421526 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 18:46:14.781620  421526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 18:46:14.781673  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:14.798032  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa Username:docker}
	I0626 18:46:14.893607  421526 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 18:46:14.896710  421526 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0626 18:46:14.896730  421526 command_runner.go:130] > NAME="Ubuntu"
	I0626 18:46:14.896739  421526 command_runner.go:130] > VERSION_ID="22.04"
	I0626 18:46:14.896747  421526 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0626 18:46:14.896755  421526 command_runner.go:130] > VERSION_CODENAME=jammy
	I0626 18:46:14.896761  421526 command_runner.go:130] > ID=ubuntu
	I0626 18:46:14.896773  421526 command_runner.go:130] > ID_LIKE=debian
	I0626 18:46:14.896790  421526 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0626 18:46:14.896802  421526 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0626 18:46:14.896811  421526 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0626 18:46:14.896817  421526 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0626 18:46:14.896824  421526 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0626 18:46:14.896894  421526 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0626 18:46:14.896924  421526 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0626 18:46:14.896935  421526 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0626 18:46:14.896948  421526 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0626 18:46:14.896960  421526 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/addons for local assets ...
	I0626 18:46:14.897009  421526 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/files for local assets ...
	I0626 18:46:14.897088  421526 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> 3369352.pem in /etc/ssl/certs
	I0626 18:46:14.897100  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> /etc/ssl/certs/3369352.pem
	I0626 18:46:14.897182  421526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 18:46:14.904993  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem --> /etc/ssl/certs/3369352.pem (1708 bytes)
	I0626 18:46:14.926681  421526 start.go:303] post-start completed in 145.131976ms
	I0626 18:46:14.927183  421526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-306845
	I0626 18:46:14.943960  421526 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/config.json ...
	I0626 18:46:14.944190  421526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 18:46:14.944230  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:14.961249  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa Username:docker}
	I0626 18:46:15.049893  421526 command_runner.go:130] > 17%
	I0626 18:46:15.049980  421526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0626 18:46:15.054091  421526 command_runner.go:130] > 242G
	I0626 18:46:15.054311  421526 start.go:128] duration metric: createHost completed in 10.261793672s
	I0626 18:46:15.054332  421526 start.go:83] releasing machines lock for "multinode-306845", held for 10.261926382s
	I0626 18:46:15.054405  421526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-306845
	I0626 18:46:15.070515  421526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 18:46:15.070620  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:15.070515  421526 ssh_runner.go:195] Run: cat /version.json
	I0626 18:46:15.070721  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:15.086887  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa Username:docker}
	I0626 18:46:15.088238  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa Username:docker}
	I0626 18:46:15.264768  421526 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0626 18:46:15.267050  421526 command_runner.go:130] > {"iso_version": "v1.30.1-1687455737-16703", "kicbase_version": "v0.0.39-1687538068-16731", "minikube_version": "v1.30.1", "commit": "b6c9c31d2704e7c9a54d66b0cdfb4e10e077c7d5"}
	I0626 18:46:15.267206  421526 ssh_runner.go:195] Run: systemctl --version
	I0626 18:46:15.271428  421526 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0626 18:46:15.271469  421526 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0626 18:46:15.271586  421526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 18:46:15.408363  421526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 18:46:15.412491  421526 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0626 18:46:15.412520  421526 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0626 18:46:15.412532  421526 command_runner.go:130] > Device: 37h/55d	Inode: 2344971     Links: 1
	I0626 18:46:15.412543  421526 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 18:46:15.412555  421526 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0626 18:46:15.412562  421526 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0626 18:46:15.412571  421526 command_runner.go:130] > Change: 2023-06-26 18:26:22.195668408 +0000
	I0626 18:46:15.412583  421526 command_runner.go:130] >  Birth: 2023-06-26 18:26:22.195668408 +0000
	I0626 18:46:15.412696  421526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:46:15.430148  421526 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0626 18:46:15.430229  421526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:46:15.455608  421526 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0626 18:46:15.455675  421526 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0626 18:46:15.455689  421526 start.go:466] detecting cgroup driver to use...
	I0626 18:46:15.455724  421526 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0626 18:46:15.455776  421526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 18:46:15.469929  421526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 18:46:15.480064  421526 docker.go:196] disabling cri-docker service (if available) ...
	I0626 18:46:15.480118  421526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 18:46:15.492430  421526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 18:46:15.505658  421526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 18:46:15.583130  421526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 18:46:15.663717  421526 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0626 18:46:15.663750  421526 docker.go:212] disabling docker service ...
	I0626 18:46:15.663809  421526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 18:46:15.681754  421526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 18:46:15.692471  421526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 18:46:15.764030  421526 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0626 18:46:15.764104  421526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 18:46:15.839875  421526 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0626 18:46:15.839957  421526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 18:46:15.850378  421526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 18:46:15.864656  421526 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0626 18:46:15.864698  421526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 18:46:15.864739  421526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:46:15.873311  421526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 18:46:15.873377  421526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:46:15.882427  421526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:46:15.891184  421526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:46:15.900151  421526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 18:46:15.908652  421526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 18:46:15.915848  421526 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0626 18:46:15.916493  421526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 18:46:15.924141  421526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 18:46:16.003098  421526 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0626 18:46:16.106842  421526 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 18:46:16.106924  421526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 18:46:16.110543  421526 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0626 18:46:16.110575  421526 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0626 18:46:16.110583  421526 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0626 18:46:16.110595  421526 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 18:46:16.110603  421526 command_runner.go:130] > Access: 2023-06-26 18:46:16.092077067 +0000
	I0626 18:46:16.110609  421526 command_runner.go:130] > Modify: 2023-06-26 18:46:16.092077067 +0000
	I0626 18:46:16.110616  421526 command_runner.go:130] > Change: 2023-06-26 18:46:16.092077067 +0000
	I0626 18:46:16.110620  421526 command_runner.go:130] >  Birth: -
	I0626 18:46:16.110643  421526 start.go:534] Will wait 60s for crictl version
	I0626 18:46:16.110683  421526 ssh_runner.go:195] Run: which crictl
	I0626 18:46:16.113898  421526 command_runner.go:130] > /usr/bin/crictl
	I0626 18:46:16.113982  421526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 18:46:16.145945  421526 command_runner.go:130] > Version:  0.1.0
	I0626 18:46:16.145968  421526 command_runner.go:130] > RuntimeName:  cri-o
	I0626 18:46:16.145972  421526 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0626 18:46:16.145978  421526 command_runner.go:130] > RuntimeApiVersion:  v1
	I0626 18:46:16.145993  421526 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0626 18:46:16.146057  421526 ssh_runner.go:195] Run: crio --version
	I0626 18:46:16.179255  421526 command_runner.go:130] > crio version 1.24.6
	I0626 18:46:16.179279  421526 command_runner.go:130] > Version:          1.24.6
	I0626 18:46:16.179289  421526 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0626 18:46:16.179295  421526 command_runner.go:130] > GitTreeState:     clean
	I0626 18:46:16.179303  421526 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0626 18:46:16.179309  421526 command_runner.go:130] > GoVersion:        go1.18.2
	I0626 18:46:16.179315  421526 command_runner.go:130] > Compiler:         gc
	I0626 18:46:16.179322  421526 command_runner.go:130] > Platform:         linux/amd64
	I0626 18:46:16.179329  421526 command_runner.go:130] > Linkmode:         dynamic
	I0626 18:46:16.179340  421526 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 18:46:16.179351  421526 command_runner.go:130] > SeccompEnabled:   true
	I0626 18:46:16.179359  421526 command_runner.go:130] > AppArmorEnabled:  false
	I0626 18:46:16.181031  421526 ssh_runner.go:195] Run: crio --version
	I0626 18:46:16.214480  421526 command_runner.go:130] > crio version 1.24.6
	I0626 18:46:16.214503  421526 command_runner.go:130] > Version:          1.24.6
	I0626 18:46:16.214509  421526 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0626 18:46:16.214513  421526 command_runner.go:130] > GitTreeState:     clean
	I0626 18:46:16.214518  421526 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0626 18:46:16.214523  421526 command_runner.go:130] > GoVersion:        go1.18.2
	I0626 18:46:16.214527  421526 command_runner.go:130] > Compiler:         gc
	I0626 18:46:16.214531  421526 command_runner.go:130] > Platform:         linux/amd64
	I0626 18:46:16.214536  421526 command_runner.go:130] > Linkmode:         dynamic
	I0626 18:46:16.214544  421526 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 18:46:16.214548  421526 command_runner.go:130] > SeccompEnabled:   true
	I0626 18:46:16.214554  421526 command_runner.go:130] > AppArmorEnabled:  false
	I0626 18:46:16.216743  421526 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0626 18:46:16.218316  421526 cli_runner.go:164] Run: docker network inspect multinode-306845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0626 18:46:16.234509  421526 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0626 18:46:16.238220  421526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 18:46:16.248357  421526 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:46:16.248408  421526 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 18:46:16.295127  421526 command_runner.go:130] > {
	I0626 18:46:16.295152  421526 command_runner.go:130] >   "images": [
	I0626 18:46:16.295157  421526 command_runner.go:130] >     {
	I0626 18:46:16.295165  421526 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0626 18:46:16.295170  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.295175  421526 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0626 18:46:16.295179  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295183  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.295218  421526 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0626 18:46:16.295237  421526 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0626 18:46:16.295245  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295250  421526 command_runner.go:130] >       "size": "65249302",
	I0626 18:46:16.295257  421526 command_runner.go:130] >       "uid": null,
	I0626 18:46:16.295261  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.295269  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.295278  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.295287  421526 command_runner.go:130] >     },
	I0626 18:46:16.295293  421526 command_runner.go:130] >     {
	I0626 18:46:16.295306  421526 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0626 18:46:16.295316  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.295325  421526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0626 18:46:16.295333  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295338  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.295347  421526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0626 18:46:16.295354  421526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0626 18:46:16.295360  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295373  421526 command_runner.go:130] >       "size": "31470524",
	I0626 18:46:16.295383  421526 command_runner.go:130] >       "uid": null,
	I0626 18:46:16.295392  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.295402  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.295412  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.295418  421526 command_runner.go:130] >     },
	I0626 18:46:16.295427  421526 command_runner.go:130] >     {
	I0626 18:46:16.295437  421526 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0626 18:46:16.295446  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.295452  421526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0626 18:46:16.295458  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295463  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.295477  421526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0626 18:46:16.295492  421526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0626 18:46:16.295502  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295508  421526 command_runner.go:130] >       "size": "53621675",
	I0626 18:46:16.295518  421526 command_runner.go:130] >       "uid": null,
	I0626 18:46:16.295525  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.295534  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.295541  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.295550  421526 command_runner.go:130] >     },
	I0626 18:46:16.295558  421526 command_runner.go:130] >     {
	I0626 18:46:16.295564  421526 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0626 18:46:16.295574  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.295586  421526 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0626 18:46:16.295594  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295601  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.295615  421526 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0626 18:46:16.295629  421526 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0626 18:46:16.295641  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295649  421526 command_runner.go:130] >       "size": "297083935",
	I0626 18:46:16.295654  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.295661  421526 command_runner.go:130] >         "value": "0"
	I0626 18:46:16.295666  421526 command_runner.go:130] >       },
	I0626 18:46:16.295675  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.295681  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.295688  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.295694  421526 command_runner.go:130] >     },
	I0626 18:46:16.295699  421526 command_runner.go:130] >     {
	I0626 18:46:16.295710  421526 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0626 18:46:16.295717  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.295726  421526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0626 18:46:16.295732  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295739  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.295754  421526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0626 18:46:16.295768  421526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0626 18:46:16.295775  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295783  421526 command_runner.go:130] >       "size": "122065872",
	I0626 18:46:16.295792  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.295799  421526 command_runner.go:130] >         "value": "0"
	I0626 18:46:16.295805  421526 command_runner.go:130] >       },
	I0626 18:46:16.295815  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.295821  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.295831  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.295837  421526 command_runner.go:130] >     },
	I0626 18:46:16.295846  421526 command_runner.go:130] >     {
	I0626 18:46:16.295856  421526 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0626 18:46:16.295863  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.295870  421526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0626 18:46:16.295876  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295883  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.295895  421526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0626 18:46:16.295907  421526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0626 18:46:16.295913  421526 command_runner.go:130] >       ],
	I0626 18:46:16.295920  421526 command_runner.go:130] >       "size": "113919286",
	I0626 18:46:16.295931  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.295938  421526 command_runner.go:130] >         "value": "0"
	I0626 18:46:16.295946  421526 command_runner.go:130] >       },
	I0626 18:46:16.295950  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.295954  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.295960  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.295966  421526 command_runner.go:130] >     },
	I0626 18:46:16.295973  421526 command_runner.go:130] >     {
	I0626 18:46:16.295983  421526 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0626 18:46:16.295993  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.296007  421526 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0626 18:46:16.296016  421526 command_runner.go:130] >       ],
	I0626 18:46:16.296023  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.296037  421526 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0626 18:46:16.296049  421526 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0626 18:46:16.296056  421526 command_runner.go:130] >       ],
	I0626 18:46:16.296063  421526 command_runner.go:130] >       "size": "72713623",
	I0626 18:46:16.296072  421526 command_runner.go:130] >       "uid": null,
	I0626 18:46:16.296080  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.296090  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.296100  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.296108  421526 command_runner.go:130] >     },
	I0626 18:46:16.296116  421526 command_runner.go:130] >     {
	I0626 18:46:16.296129  421526 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0626 18:46:16.296138  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.296144  421526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0626 18:46:16.296152  421526 command_runner.go:130] >       ],
	I0626 18:46:16.296159  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.296221  421526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0626 18:46:16.296236  421526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0626 18:46:16.296243  421526 command_runner.go:130] >       ],
	I0626 18:46:16.296250  421526 command_runner.go:130] >       "size": "59811126",
	I0626 18:46:16.296259  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.296266  421526 command_runner.go:130] >         "value": "0"
	I0626 18:46:16.296276  421526 command_runner.go:130] >       },
	I0626 18:46:16.296283  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.296292  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.296299  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.296307  421526 command_runner.go:130] >     },
	I0626 18:46:16.296313  421526 command_runner.go:130] >     {
	I0626 18:46:16.296327  421526 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0626 18:46:16.296336  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.296344  421526 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0626 18:46:16.296353  421526 command_runner.go:130] >       ],
	I0626 18:46:16.296360  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.296373  421526 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0626 18:46:16.296386  421526 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0626 18:46:16.296392  421526 command_runner.go:130] >       ],
	I0626 18:46:16.296396  421526 command_runner.go:130] >       "size": "750414",
	I0626 18:46:16.296401  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.296406  421526 command_runner.go:130] >         "value": "65535"
	I0626 18:46:16.296412  421526 command_runner.go:130] >       },
	I0626 18:46:16.296415  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.296420  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.296428  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.296436  421526 command_runner.go:130] >     }
	I0626 18:46:16.296442  421526 command_runner.go:130] >   ]
	I0626 18:46:16.296451  421526 command_runner.go:130] > }
	I0626 18:46:16.297456  421526 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 18:46:16.297476  421526 crio.go:415] Images already preloaded, skipping extraction
	I0626 18:46:16.297518  421526 ssh_runner.go:195] Run: sudo crictl images --output json
	I0626 18:46:16.328201  421526 command_runner.go:130] > {
	I0626 18:46:16.328227  421526 command_runner.go:130] >   "images": [
	I0626 18:46:16.328233  421526 command_runner.go:130] >     {
	I0626 18:46:16.328245  421526 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0626 18:46:16.328252  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.328262  421526 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0626 18:46:16.328269  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328276  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.328285  421526 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0626 18:46:16.328295  421526 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0626 18:46:16.328299  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328306  421526 command_runner.go:130] >       "size": "65249302",
	I0626 18:46:16.328310  421526 command_runner.go:130] >       "uid": null,
	I0626 18:46:16.328314  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.328322  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.328328  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.328332  421526 command_runner.go:130] >     },
	I0626 18:46:16.328336  421526 command_runner.go:130] >     {
	I0626 18:46:16.328341  421526 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0626 18:46:16.328345  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.328350  421526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0626 18:46:16.328353  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328357  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.328364  421526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0626 18:46:16.328371  421526 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0626 18:46:16.328375  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328400  421526 command_runner.go:130] >       "size": "31470524",
	I0626 18:46:16.328410  421526 command_runner.go:130] >       "uid": null,
	I0626 18:46:16.328414  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.328418  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.328422  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.328428  421526 command_runner.go:130] >     },
	I0626 18:46:16.328432  421526 command_runner.go:130] >     {
	I0626 18:46:16.328437  421526 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0626 18:46:16.328444  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.328449  421526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0626 18:46:16.328455  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328459  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.328468  421526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0626 18:46:16.328477  421526 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0626 18:46:16.328483  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328487  421526 command_runner.go:130] >       "size": "53621675",
	I0626 18:46:16.328498  421526 command_runner.go:130] >       "uid": null,
	I0626 18:46:16.328508  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.328515  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.328519  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.328525  421526 command_runner.go:130] >     },
	I0626 18:46:16.328529  421526 command_runner.go:130] >     {
	I0626 18:46:16.328537  421526 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0626 18:46:16.328543  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.328549  421526 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0626 18:46:16.328554  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328559  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.328567  421526 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0626 18:46:16.328576  421526 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0626 18:46:16.328588  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328593  421526 command_runner.go:130] >       "size": "297083935",
	I0626 18:46:16.328599  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.328604  421526 command_runner.go:130] >         "value": "0"
	I0626 18:46:16.328607  421526 command_runner.go:130] >       },
	I0626 18:46:16.328613  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.328620  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.328626  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.328630  421526 command_runner.go:130] >     },
	I0626 18:46:16.328636  421526 command_runner.go:130] >     {
	I0626 18:46:16.328642  421526 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0626 18:46:16.328648  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.328653  421526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0626 18:46:16.328661  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328667  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.328677  421526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0626 18:46:16.328686  421526 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0626 18:46:16.328690  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328697  421526 command_runner.go:130] >       "size": "122065872",
	I0626 18:46:16.328700  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.328706  421526 command_runner.go:130] >         "value": "0"
	I0626 18:46:16.328710  421526 command_runner.go:130] >       },
	I0626 18:46:16.328719  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.328725  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.328732  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.328737  421526 command_runner.go:130] >     },
	I0626 18:46:16.328741  421526 command_runner.go:130] >     {
	I0626 18:46:16.328749  421526 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0626 18:46:16.328755  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.328760  421526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0626 18:46:16.328767  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328771  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.328778  421526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0626 18:46:16.328787  421526 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0626 18:46:16.328793  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328798  421526 command_runner.go:130] >       "size": "113919286",
	I0626 18:46:16.328803  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.328807  421526 command_runner.go:130] >         "value": "0"
	I0626 18:46:16.328813  421526 command_runner.go:130] >       },
	I0626 18:46:16.328817  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.328823  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.328827  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.328835  421526 command_runner.go:130] >     },
	I0626 18:46:16.328841  421526 command_runner.go:130] >     {
	I0626 18:46:16.328847  421526 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0626 18:46:16.328853  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.328871  421526 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0626 18:46:16.328881  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328888  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.328901  421526 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0626 18:46:16.328910  421526 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0626 18:46:16.328916  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328920  421526 command_runner.go:130] >       "size": "72713623",
	I0626 18:46:16.328926  421526 command_runner.go:130] >       "uid": null,
	I0626 18:46:16.328931  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.328937  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.328942  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.328948  421526 command_runner.go:130] >     },
	I0626 18:46:16.328951  421526 command_runner.go:130] >     {
	I0626 18:46:16.328962  421526 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0626 18:46:16.328971  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.328978  421526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0626 18:46:16.328982  421526 command_runner.go:130] >       ],
	I0626 18:46:16.328988  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.329085  421526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0626 18:46:16.329103  421526 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0626 18:46:16.329107  421526 command_runner.go:130] >       ],
	I0626 18:46:16.329112  421526 command_runner.go:130] >       "size": "59811126",
	I0626 18:46:16.329115  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.329120  421526 command_runner.go:130] >         "value": "0"
	I0626 18:46:16.329123  421526 command_runner.go:130] >       },
	I0626 18:46:16.329128  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.329135  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.329139  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.329143  421526 command_runner.go:130] >     },
	I0626 18:46:16.329146  421526 command_runner.go:130] >     {
	I0626 18:46:16.329152  421526 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0626 18:46:16.329159  421526 command_runner.go:130] >       "repoTags": [
	I0626 18:46:16.329165  421526 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0626 18:46:16.329171  421526 command_runner.go:130] >       ],
	I0626 18:46:16.329176  421526 command_runner.go:130] >       "repoDigests": [
	I0626 18:46:16.329184  421526 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0626 18:46:16.329193  421526 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0626 18:46:16.329199  421526 command_runner.go:130] >       ],
	I0626 18:46:16.329204  421526 command_runner.go:130] >       "size": "750414",
	I0626 18:46:16.329210  421526 command_runner.go:130] >       "uid": {
	I0626 18:46:16.329214  421526 command_runner.go:130] >         "value": "65535"
	I0626 18:46:16.329219  421526 command_runner.go:130] >       },
	I0626 18:46:16.329224  421526 command_runner.go:130] >       "username": "",
	I0626 18:46:16.329232  421526 command_runner.go:130] >       "spec": null,
	I0626 18:46:16.329239  421526 command_runner.go:130] >       "pinned": false
	I0626 18:46:16.329242  421526 command_runner.go:130] >     }
	I0626 18:46:16.329248  421526 command_runner.go:130] >   ]
	I0626 18:46:16.329252  421526 command_runner.go:130] > }
	I0626 18:46:16.330500  421526 crio.go:496] all images are preloaded for cri-o runtime.
	I0626 18:46:16.330520  421526 cache_images.go:84] Images are preloaded, skipping loading
	I0626 18:46:16.330625  421526 ssh_runner.go:195] Run: crio config
	I0626 18:46:16.368646  421526 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0626 18:46:16.368674  421526 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0626 18:46:16.368683  421526 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0626 18:46:16.368688  421526 command_runner.go:130] > #
	I0626 18:46:16.368698  421526 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0626 18:46:16.368707  421526 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0626 18:46:16.368718  421526 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0626 18:46:16.368731  421526 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0626 18:46:16.368736  421526 command_runner.go:130] > # reload'.
	I0626 18:46:16.368746  421526 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0626 18:46:16.368759  421526 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0626 18:46:16.368781  421526 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0626 18:46:16.368790  421526 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0626 18:46:16.368796  421526 command_runner.go:130] > [crio]
	I0626 18:46:16.368808  421526 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0626 18:46:16.368821  421526 command_runner.go:130] > # containers images, in this directory.
	I0626 18:46:16.368839  421526 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0626 18:46:16.368856  421526 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0626 18:46:16.368878  421526 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0626 18:46:16.368890  421526 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0626 18:46:16.368903  421526 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0626 18:46:16.368911  421526 command_runner.go:130] > # storage_driver = "vfs"
	I0626 18:46:16.368925  421526 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0626 18:46:16.368934  421526 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0626 18:46:16.368940  421526 command_runner.go:130] > # storage_option = [
	I0626 18:46:16.368944  421526 command_runner.go:130] > # ]
	I0626 18:46:16.368950  421526 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0626 18:46:16.368958  421526 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0626 18:46:16.368964  421526 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0626 18:46:16.368976  421526 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0626 18:46:16.368986  421526 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0626 18:46:16.368997  421526 command_runner.go:130] > # always happen on a node reboot
	I0626 18:46:16.369004  421526 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0626 18:46:16.369016  421526 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0626 18:46:16.369028  421526 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0626 18:46:16.369053  421526 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0626 18:46:16.369065  421526 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0626 18:46:16.369083  421526 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0626 18:46:16.369126  421526 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0626 18:46:16.369138  421526 command_runner.go:130] > # internal_wipe = true
	I0626 18:46:16.369148  421526 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0626 18:46:16.369160  421526 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0626 18:46:16.369171  421526 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0626 18:46:16.369180  421526 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0626 18:46:16.369194  421526 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0626 18:46:16.369204  421526 command_runner.go:130] > [crio.api]
	I0626 18:46:16.369212  421526 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0626 18:46:16.369223  421526 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0626 18:46:16.369234  421526 command_runner.go:130] > # IP address on which the stream server will listen.
	I0626 18:46:16.369245  421526 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0626 18:46:16.369261  421526 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0626 18:46:16.369270  421526 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0626 18:46:16.369279  421526 command_runner.go:130] > # stream_port = "0"
	I0626 18:46:16.369295  421526 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0626 18:46:16.369322  421526 command_runner.go:130] > # stream_enable_tls = false
	I0626 18:46:16.369335  421526 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0626 18:46:16.369341  421526 command_runner.go:130] > # stream_idle_timeout = ""
	I0626 18:46:16.369349  421526 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0626 18:46:16.369372  421526 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0626 18:46:16.369382  421526 command_runner.go:130] > # minutes.
	I0626 18:46:16.369388  421526 command_runner.go:130] > # stream_tls_cert = ""
	I0626 18:46:16.369402  421526 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0626 18:46:16.369412  421526 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0626 18:46:16.369422  421526 command_runner.go:130] > # stream_tls_key = ""
	I0626 18:46:16.369431  421526 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0626 18:46:16.369452  421526 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0626 18:46:16.369464  421526 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0626 18:46:16.369474  421526 command_runner.go:130] > # stream_tls_ca = ""
	I0626 18:46:16.369491  421526 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 18:46:16.369501  421526 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0626 18:46:16.369511  421526 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 18:46:16.369523  421526 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0626 18:46:16.369608  421526 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0626 18:46:16.369624  421526 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0626 18:46:16.369630  421526 command_runner.go:130] > [crio.runtime]
	I0626 18:46:16.369639  421526 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0626 18:46:16.369648  421526 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0626 18:46:16.369658  421526 command_runner.go:130] > # "nofile=1024:2048"
	I0626 18:46:16.369672  421526 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0626 18:46:16.369682  421526 command_runner.go:130] > # default_ulimits = [
	I0626 18:46:16.369687  421526 command_runner.go:130] > # ]
	I0626 18:46:16.369697  421526 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0626 18:46:16.369707  421526 command_runner.go:130] > # no_pivot = false
	I0626 18:46:16.369717  421526 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0626 18:46:16.369730  421526 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0626 18:46:16.369743  421526 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0626 18:46:16.369752  421526 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0626 18:46:16.369761  421526 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0626 18:46:16.369769  421526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 18:46:16.369788  421526 command_runner.go:130] > # conmon = ""
	I0626 18:46:16.369794  421526 command_runner.go:130] > # Cgroup setting for conmon
	I0626 18:46:16.369804  421526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0626 18:46:16.369816  421526 command_runner.go:130] > conmon_cgroup = "pod"
	I0626 18:46:16.369826  421526 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0626 18:46:16.369837  421526 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0626 18:46:16.369846  421526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 18:46:16.369854  421526 command_runner.go:130] > # conmon_env = [
	I0626 18:46:16.369858  421526 command_runner.go:130] > # ]
	I0626 18:46:16.369867  421526 command_runner.go:130] > # Additional environment variables to set for all the
	I0626 18:46:16.369876  421526 command_runner.go:130] > # containers. These are overridden if set in the
	I0626 18:46:16.369888  421526 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0626 18:46:16.369895  421526 command_runner.go:130] > # default_env = [
	I0626 18:46:16.369903  421526 command_runner.go:130] > # ]
	I0626 18:46:16.369912  421526 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0626 18:46:16.369922  421526 command_runner.go:130] > # selinux = false
	I0626 18:46:16.369932  421526 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0626 18:46:16.369944  421526 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0626 18:46:16.369963  421526 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0626 18:46:16.369972  421526 command_runner.go:130] > # seccomp_profile = ""
	I0626 18:46:16.369979  421526 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0626 18:46:16.369987  421526 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0626 18:46:16.369995  421526 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0626 18:46:16.369999  421526 command_runner.go:130] > # which might increase security.
	I0626 18:46:16.370004  421526 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0626 18:46:16.370009  421526 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0626 18:46:16.370016  421526 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0626 18:46:16.370024  421526 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0626 18:46:16.370032  421526 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0626 18:46:16.370039  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:46:16.370044  421526 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0626 18:46:16.370054  421526 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0626 18:46:16.370061  421526 command_runner.go:130] > # the cgroup blockio controller.
	I0626 18:46:16.370065  421526 command_runner.go:130] > # blockio_config_file = ""
	I0626 18:46:16.370073  421526 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0626 18:46:16.370080  421526 command_runner.go:130] > # irqbalance daemon.
	I0626 18:46:16.370087  421526 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0626 18:46:16.370096  421526 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0626 18:46:16.370105  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:46:16.370109  421526 command_runner.go:130] > # rdt_config_file = ""
	I0626 18:46:16.370116  421526 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0626 18:46:16.370123  421526 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0626 18:46:16.370129  421526 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0626 18:46:16.370135  421526 command_runner.go:130] > # separate_pull_cgroup = ""
	I0626 18:46:16.370142  421526 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0626 18:46:16.370151  421526 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0626 18:46:16.370157  421526 command_runner.go:130] > # will be added.
	I0626 18:46:16.370161  421526 command_runner.go:130] > # default_capabilities = [
	I0626 18:46:16.370167  421526 command_runner.go:130] > # 	"CHOWN",
	I0626 18:46:16.370171  421526 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0626 18:46:16.370177  421526 command_runner.go:130] > # 	"FSETID",
	I0626 18:46:16.370181  421526 command_runner.go:130] > # 	"FOWNER",
	I0626 18:46:16.370187  421526 command_runner.go:130] > # 	"SETGID",
	I0626 18:46:16.370190  421526 command_runner.go:130] > # 	"SETUID",
	I0626 18:46:16.370199  421526 command_runner.go:130] > # 	"SETPCAP",
	I0626 18:46:16.370203  421526 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0626 18:46:16.370209  421526 command_runner.go:130] > # 	"KILL",
	I0626 18:46:16.370213  421526 command_runner.go:130] > # ]
	I0626 18:46:16.370223  421526 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0626 18:46:16.370232  421526 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0626 18:46:16.370239  421526 command_runner.go:130] > # add_inheritable_capabilities = true
	I0626 18:46:16.370245  421526 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0626 18:46:16.370252  421526 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 18:46:16.370258  421526 command_runner.go:130] > # default_sysctls = [
	I0626 18:46:16.370262  421526 command_runner.go:130] > # ]
	I0626 18:46:16.370267  421526 command_runner.go:130] > # List of devices on the host that a
	I0626 18:46:16.370275  421526 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0626 18:46:16.370281  421526 command_runner.go:130] > # allowed_devices = [
	I0626 18:46:16.370285  421526 command_runner.go:130] > # 	"/dev/fuse",
	I0626 18:46:16.370291  421526 command_runner.go:130] > # ]
	I0626 18:46:16.370295  421526 command_runner.go:130] > # List of additional devices, specified as
	I0626 18:46:16.370356  421526 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0626 18:46:16.370370  421526 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0626 18:46:16.370375  421526 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 18:46:16.370381  421526 command_runner.go:130] > # additional_devices = [
	I0626 18:46:16.370384  421526 command_runner.go:130] > # ]
	I0626 18:46:16.370389  421526 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0626 18:46:16.370394  421526 command_runner.go:130] > # cdi_spec_dirs = [
	I0626 18:46:16.370398  421526 command_runner.go:130] > # 	"/etc/cdi",
	I0626 18:46:16.370402  421526 command_runner.go:130] > # 	"/var/run/cdi",
	I0626 18:46:16.370405  421526 command_runner.go:130] > # ]
	I0626 18:46:16.370413  421526 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0626 18:46:16.370422  421526 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0626 18:46:16.370426  421526 command_runner.go:130] > # Defaults to false.
	I0626 18:46:16.370432  421526 command_runner.go:130] > # device_ownership_from_security_context = false
	I0626 18:46:16.370438  421526 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0626 18:46:16.370452  421526 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0626 18:46:16.370456  421526 command_runner.go:130] > # hooks_dir = [
	I0626 18:46:16.370460  421526 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0626 18:46:16.370463  421526 command_runner.go:130] > # ]
	I0626 18:46:16.370471  421526 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0626 18:46:16.370477  421526 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0626 18:46:16.370484  421526 command_runner.go:130] > # its default mounts from the following two files:
	I0626 18:46:16.370490  421526 command_runner.go:130] > #
	I0626 18:46:16.370496  421526 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0626 18:46:16.370504  421526 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0626 18:46:16.370512  421526 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0626 18:46:16.370516  421526 command_runner.go:130] > #
	I0626 18:46:16.370522  421526 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0626 18:46:16.370531  421526 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0626 18:46:16.370539  421526 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0626 18:46:16.370546  421526 command_runner.go:130] > #      only add mounts it finds in this file.
	I0626 18:46:16.370549  421526 command_runner.go:130] > #
	I0626 18:46:16.370554  421526 command_runner.go:130] > # default_mounts_file = ""
	I0626 18:46:16.370559  421526 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0626 18:46:16.370568  421526 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0626 18:46:16.370572  421526 command_runner.go:130] > # pids_limit = 0
	I0626 18:46:16.370580  421526 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0626 18:46:16.370594  421526 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0626 18:46:16.370602  421526 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0626 18:46:16.370611  421526 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0626 18:46:16.370617  421526 command_runner.go:130] > # log_size_max = -1
	I0626 18:46:16.370626  421526 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0626 18:46:16.370636  421526 command_runner.go:130] > # log_to_journald = false
	I0626 18:46:16.370644  421526 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0626 18:46:16.370651  421526 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0626 18:46:16.370660  421526 command_runner.go:130] > # Path to directory for container attach sockets.
	I0626 18:46:16.370667  421526 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0626 18:46:16.370672  421526 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0626 18:46:16.370679  421526 command_runner.go:130] > # bind_mount_prefix = ""
	I0626 18:46:16.370684  421526 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0626 18:46:16.370690  421526 command_runner.go:130] > # read_only = false
	I0626 18:46:16.370696  421526 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0626 18:46:16.370704  421526 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0626 18:46:16.370710  421526 command_runner.go:130] > # live configuration reload.
	I0626 18:46:16.370714  421526 command_runner.go:130] > # log_level = "info"
	I0626 18:46:16.370724  421526 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0626 18:46:16.370732  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:46:16.370736  421526 command_runner.go:130] > # log_filter = ""
	I0626 18:46:16.370743  421526 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0626 18:46:16.370751  421526 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0626 18:46:16.370757  421526 command_runner.go:130] > # separated by comma.
	I0626 18:46:16.370762  421526 command_runner.go:130] > # uid_mappings = ""
	I0626 18:46:16.370771  421526 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0626 18:46:16.370779  421526 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0626 18:46:16.370786  421526 command_runner.go:130] > # separated by comma.
	I0626 18:46:16.370790  421526 command_runner.go:130] > # gid_mappings = ""
	I0626 18:46:16.370798  421526 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0626 18:46:16.370806  421526 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 18:46:16.370814  421526 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 18:46:16.370820  421526 command_runner.go:130] > # minimum_mappable_uid = -1
	I0626 18:46:16.370826  421526 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0626 18:46:16.370834  421526 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 18:46:16.370842  421526 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 18:46:16.370852  421526 command_runner.go:130] > # minimum_mappable_gid = -1
	I0626 18:46:16.370858  421526 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0626 18:46:16.370866  421526 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0626 18:46:16.370874  421526 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0626 18:46:16.370880  421526 command_runner.go:130] > # ctr_stop_timeout = 30
	I0626 18:46:16.370891  421526 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0626 18:46:16.370914  421526 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0626 18:46:16.370925  421526 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0626 18:46:16.370930  421526 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0626 18:46:16.370933  421526 command_runner.go:130] > # drop_infra_ctr = true
	I0626 18:46:16.370939  421526 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0626 18:46:16.370944  421526 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0626 18:46:16.370951  421526 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0626 18:46:16.370957  421526 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0626 18:46:16.370963  421526 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0626 18:46:16.370968  421526 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0626 18:46:16.370974  421526 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0626 18:46:16.370980  421526 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0626 18:46:16.370989  421526 command_runner.go:130] > # pinns_path = ""
	I0626 18:46:16.370995  421526 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0626 18:46:16.371003  421526 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0626 18:46:16.371009  421526 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0626 18:46:16.371016  421526 command_runner.go:130] > # default_runtime = "runc"
	I0626 18:46:16.371021  421526 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0626 18:46:16.371030  421526 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0626 18:46:16.371039  421526 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0626 18:46:16.371046  421526 command_runner.go:130] > # creation as a file is not desired either.
	I0626 18:46:16.371054  421526 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0626 18:46:16.371061  421526 command_runner.go:130] > # the hostname is being managed dynamically.
	I0626 18:46:16.371065  421526 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0626 18:46:16.371069  421526 command_runner.go:130] > # ]
	I0626 18:46:16.371076  421526 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0626 18:46:16.371084  421526 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0626 18:46:16.371091  421526 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0626 18:46:16.371099  421526 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0626 18:46:16.371102  421526 command_runner.go:130] > #
	I0626 18:46:16.371109  421526 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0626 18:46:16.371116  421526 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0626 18:46:16.371120  421526 command_runner.go:130] > #  runtime_type = "oci"
	I0626 18:46:16.371127  421526 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0626 18:46:16.371132  421526 command_runner.go:130] > #  privileged_without_host_devices = false
	I0626 18:46:16.371138  421526 command_runner.go:130] > #  allowed_annotations = []
	I0626 18:46:16.371141  421526 command_runner.go:130] > # Where:
	I0626 18:46:16.371146  421526 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0626 18:46:16.371155  421526 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0626 18:46:16.371164  421526 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0626 18:46:16.371169  421526 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0626 18:46:16.371175  421526 command_runner.go:130] > #   in $PATH.
	I0626 18:46:16.371181  421526 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0626 18:46:16.371188  421526 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0626 18:46:16.371193  421526 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0626 18:46:16.371199  421526 command_runner.go:130] > #   state.
	I0626 18:46:16.371205  421526 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0626 18:46:16.371213  421526 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0626 18:46:16.371222  421526 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0626 18:46:16.371229  421526 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0626 18:46:16.371235  421526 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0626 18:46:16.371243  421526 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0626 18:46:16.371248  421526 command_runner.go:130] > #   The currently recognized values are:
	I0626 18:46:16.371257  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0626 18:46:16.371264  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0626 18:46:16.371272  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0626 18:46:16.371278  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0626 18:46:16.371287  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0626 18:46:16.371293  421526 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0626 18:46:16.371301  421526 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0626 18:46:16.371307  421526 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0626 18:46:16.371314  421526 command_runner.go:130] > #   should be moved to the container's cgroup
	I0626 18:46:16.371318  421526 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0626 18:46:16.371325  421526 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0626 18:46:16.371329  421526 command_runner.go:130] > runtime_type = "oci"
	I0626 18:46:16.371335  421526 command_runner.go:130] > runtime_root = "/run/runc"
	I0626 18:46:16.371341  421526 command_runner.go:130] > runtime_config_path = ""
	I0626 18:46:16.371348  421526 command_runner.go:130] > monitor_path = ""
	I0626 18:46:16.371352  421526 command_runner.go:130] > monitor_cgroup = ""
	I0626 18:46:16.371355  421526 command_runner.go:130] > monitor_exec_cgroup = ""
	I0626 18:46:16.371419  421526 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0626 18:46:16.371428  421526 command_runner.go:130] > # running containers
	I0626 18:46:16.371432  421526 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0626 18:46:16.371438  421526 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0626 18:46:16.371449  421526 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0626 18:46:16.371457  421526 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0626 18:46:16.371462  421526 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0626 18:46:16.371466  421526 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0626 18:46:16.371471  421526 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0626 18:46:16.371477  421526 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0626 18:46:16.371482  421526 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0626 18:46:16.371489  421526 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0626 18:46:16.371495  421526 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0626 18:46:16.371502  421526 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0626 18:46:16.371518  421526 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0626 18:46:16.371527  421526 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0626 18:46:16.371540  421526 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0626 18:46:16.371548  421526 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0626 18:46:16.371556  421526 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0626 18:46:16.371566  421526 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0626 18:46:16.371571  421526 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0626 18:46:16.371580  421526 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0626 18:46:16.371584  421526 command_runner.go:130] > # Example:
	I0626 18:46:16.371589  421526 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0626 18:46:16.371594  421526 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0626 18:46:16.371599  421526 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0626 18:46:16.371606  421526 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0626 18:46:16.371610  421526 command_runner.go:130] > # cpuset = 0
	I0626 18:46:16.371616  421526 command_runner.go:130] > # cpushares = "0-1"
	I0626 18:46:16.371621  421526 command_runner.go:130] > # Where:
	I0626 18:46:16.371628  421526 command_runner.go:130] > # The workload name is workload-type.
	I0626 18:46:16.371641  421526 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0626 18:46:16.371656  421526 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0626 18:46:16.371664  421526 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0626 18:46:16.371672  421526 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0626 18:46:16.371679  421526 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0626 18:46:16.371683  421526 command_runner.go:130] > # 
	I0626 18:46:16.371695  421526 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0626 18:46:16.371701  421526 command_runner.go:130] > #
	I0626 18:46:16.371710  421526 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0626 18:46:16.371718  421526 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0626 18:46:16.371724  421526 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0626 18:46:16.371732  421526 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0626 18:46:16.371738  421526 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0626 18:46:16.371742  421526 command_runner.go:130] > [crio.image]
	I0626 18:46:16.371747  421526 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0626 18:46:16.371752  421526 command_runner.go:130] > # default_transport = "docker://"
	I0626 18:46:16.371758  421526 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0626 18:46:16.371766  421526 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0626 18:46:16.371771  421526 command_runner.go:130] > # global_auth_file = ""
	I0626 18:46:16.371781  421526 command_runner.go:130] > # The image used to instantiate infra containers.
	I0626 18:46:16.371789  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:46:16.371793  421526 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0626 18:46:16.371802  421526 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0626 18:46:16.371807  421526 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0626 18:46:16.371814  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:46:16.371819  421526 command_runner.go:130] > # pause_image_auth_file = ""
	I0626 18:46:16.371825  421526 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0626 18:46:16.371830  421526 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0626 18:46:16.371837  421526 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0626 18:46:16.371844  421526 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0626 18:46:16.371850  421526 command_runner.go:130] > # pause_command = "/pause"
	I0626 18:46:16.371856  421526 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0626 18:46:16.371864  421526 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0626 18:46:16.371870  421526 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0626 18:46:16.371878  421526 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0626 18:46:16.371883  421526 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0626 18:46:16.371890  421526 command_runner.go:130] > # signature_policy = ""
	I0626 18:46:16.371916  421526 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0626 18:46:16.371924  421526 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0626 18:46:16.371929  421526 command_runner.go:130] > # changing them here.
	I0626 18:46:16.371933  421526 command_runner.go:130] > # insecure_registries = [
	I0626 18:46:16.371936  421526 command_runner.go:130] > # ]
	I0626 18:46:16.371942  421526 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0626 18:46:16.371949  421526 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0626 18:46:16.371955  421526 command_runner.go:130] > # image_volumes = "mkdir"
	I0626 18:46:16.371962  421526 command_runner.go:130] > # Temporary directory to use for storing big files
	I0626 18:46:16.371966  421526 command_runner.go:130] > # big_files_temporary_dir = ""
	I0626 18:46:16.371974  421526 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0626 18:46:16.371981  421526 command_runner.go:130] > # CNI plugins.
	I0626 18:46:16.371985  421526 command_runner.go:130] > [crio.network]
	I0626 18:46:16.371992  421526 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0626 18:46:16.371999  421526 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0626 18:46:16.372006  421526 command_runner.go:130] > # cni_default_network = ""
	I0626 18:46:16.372012  421526 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0626 18:46:16.372022  421526 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0626 18:46:16.372034  421526 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0626 18:46:16.372040  421526 command_runner.go:130] > # plugin_dirs = [
	I0626 18:46:16.372045  421526 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0626 18:46:16.372051  421526 command_runner.go:130] > # ]
	I0626 18:46:16.372063  421526 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0626 18:46:16.372070  421526 command_runner.go:130] > [crio.metrics]
	I0626 18:46:16.372075  421526 command_runner.go:130] > # Globally enable or disable metrics support.
	I0626 18:46:16.372082  421526 command_runner.go:130] > # enable_metrics = false
	I0626 18:46:16.372086  421526 command_runner.go:130] > # Specify enabled metrics collectors.
	I0626 18:46:16.372091  421526 command_runner.go:130] > # Per default all metrics are enabled.
	I0626 18:46:16.372108  421526 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0626 18:46:16.372117  421526 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0626 18:46:16.372128  421526 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0626 18:46:16.372138  421526 command_runner.go:130] > # metrics_collectors = [
	I0626 18:46:16.372143  421526 command_runner.go:130] > # 	"operations",
	I0626 18:46:16.372152  421526 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0626 18:46:16.372162  421526 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0626 18:46:16.372168  421526 command_runner.go:130] > # 	"operations_errors",
	I0626 18:46:16.372177  421526 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0626 18:46:16.372181  421526 command_runner.go:130] > # 	"image_pulls_by_name",
	I0626 18:46:16.372186  421526 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0626 18:46:16.372191  421526 command_runner.go:130] > # 	"image_pulls_failures",
	I0626 18:46:16.372196  421526 command_runner.go:130] > # 	"image_pulls_successes",
	I0626 18:46:16.372205  421526 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0626 18:46:16.372209  421526 command_runner.go:130] > # 	"image_layer_reuse",
	I0626 18:46:16.372215  421526 command_runner.go:130] > # 	"containers_oom_total",
	I0626 18:46:16.372219  421526 command_runner.go:130] > # 	"containers_oom",
	I0626 18:46:16.372227  421526 command_runner.go:130] > # 	"processes_defunct",
	I0626 18:46:16.372231  421526 command_runner.go:130] > # 	"operations_total",
	I0626 18:46:16.372236  421526 command_runner.go:130] > # 	"operations_latency_seconds",
	I0626 18:46:16.372241  421526 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0626 18:46:16.372247  421526 command_runner.go:130] > # 	"operations_errors_total",
	I0626 18:46:16.372251  421526 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0626 18:46:16.372256  421526 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0626 18:46:16.372265  421526 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0626 18:46:16.372271  421526 command_runner.go:130] > # 	"image_pulls_success_total",
	I0626 18:46:16.372284  421526 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0626 18:46:16.372294  421526 command_runner.go:130] > # 	"containers_oom_count_total",
	I0626 18:46:16.372303  421526 command_runner.go:130] > # ]
	I0626 18:46:16.372314  421526 command_runner.go:130] > # The port on which the metrics server will listen.
	I0626 18:46:16.372324  421526 command_runner.go:130] > # metrics_port = 9090
	I0626 18:46:16.372333  421526 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0626 18:46:16.372339  421526 command_runner.go:130] > # metrics_socket = ""
	I0626 18:46:16.372344  421526 command_runner.go:130] > # The certificate for the secure metrics server.
	I0626 18:46:16.372352  421526 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0626 18:46:16.372360  421526 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0626 18:46:16.372370  421526 command_runner.go:130] > # certificate on any modification event.
	I0626 18:46:16.372379  421526 command_runner.go:130] > # metrics_cert = ""
	I0626 18:46:16.372391  421526 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0626 18:46:16.372403  421526 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0626 18:46:16.372412  421526 command_runner.go:130] > # metrics_key = ""
	I0626 18:46:16.372425  421526 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0626 18:46:16.372434  421526 command_runner.go:130] > [crio.tracing]
	I0626 18:46:16.372500  421526 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0626 18:46:16.372534  421526 command_runner.go:130] > # enable_tracing = false
	I0626 18:46:16.372547  421526 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0626 18:46:16.372558  421526 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0626 18:46:16.372569  421526 command_runner.go:130] > # Number of samples to collect per million spans.
	I0626 18:46:16.372585  421526 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0626 18:46:16.372601  421526 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0626 18:46:16.372613  421526 command_runner.go:130] > [crio.stats]
	I0626 18:46:16.372625  421526 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0626 18:46:16.372638  421526 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0626 18:46:16.372648  421526 command_runner.go:130] > # stats_collection_period = 0
	I0626 18:46:16.372696  421526 command_runner.go:130] ! time="2023-06-26 18:46:16.365237776Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0626 18:46:16.372717  421526 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0626 18:46:16.372810  421526 cni.go:84] Creating CNI manager for ""
	I0626 18:46:16.372830  421526 cni.go:137] 1 nodes found, recommending kindnet
	I0626 18:46:16.372842  421526 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 18:46:16.372887  421526 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-306845 NodeName:multinode-306845 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 18:46:16.373118  421526 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-306845"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 18:46:16.373229  421526 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-306845 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-306845 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 18:46:16.373297  421526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 18:46:16.380975  421526 command_runner.go:130] > kubeadm
	I0626 18:46:16.380992  421526 command_runner.go:130] > kubectl
	I0626 18:46:16.380999  421526 command_runner.go:130] > kubelet
	I0626 18:46:16.381715  421526 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 18:46:16.381794  421526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0626 18:46:16.389731  421526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0626 18:46:16.405492  421526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 18:46:16.420880  421526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0626 18:46:16.436474  421526 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0626 18:46:16.439631  421526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 18:46:16.449375  421526 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845 for IP: 192.168.58.2
	I0626 18:46:16.449423  421526 certs.go:190] acquiring lock for shared ca certs: {Name:mk5dcd9e05f1fa507f67df494d102e50ef2554ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:16.449581  421526 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key
	I0626 18:46:16.449623  421526 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key
	I0626 18:46:16.449667  421526 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.key
	I0626 18:46:16.449683  421526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.crt with IP's: []
	I0626 18:46:16.685085  421526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.crt ...
	I0626 18:46:16.685133  421526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.crt: {Name:mkadc7b52685eb68f4e7b386e08ecdf81e075895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:16.685334  421526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.key ...
	I0626 18:46:16.685346  421526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.key: {Name:mk560206c5502f107c16c0a8dbd15101c71875ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:16.685420  421526 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.key.cee25041
	I0626 18:46:16.685435  421526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0626 18:46:16.789705  421526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.crt.cee25041 ...
	I0626 18:46:16.789741  421526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.crt.cee25041: {Name:mkeff517ce1dc4c9f8e0d51dd3c2b260a85fe5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:16.789901  421526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.key.cee25041 ...
	I0626 18:46:16.789913  421526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.key.cee25041: {Name:mk3e28cd517341432097aa5af8f1d9c0961a9e01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:16.789985  421526 certs.go:337] copying /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.crt
	I0626 18:46:16.790055  421526 certs.go:341] copying /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.key
	I0626 18:46:16.790103  421526 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.key
	I0626 18:46:16.790117  421526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.crt with IP's: []
	I0626 18:46:17.021333  421526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.crt ...
	I0626 18:46:17.021369  421526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.crt: {Name:mkf3ec5d80739320f1841c552cb428aa0a7964c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:17.021537  421526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.key ...
	I0626 18:46:17.021549  421526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.key: {Name:mka7624fcaa0f0649cd5562b23b203e3afd7c9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:17.021617  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0626 18:46:17.021635  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0626 18:46:17.021644  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0626 18:46:17.021656  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0626 18:46:17.021668  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0626 18:46:17.021678  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0626 18:46:17.021690  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0626 18:46:17.021703  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0626 18:46:17.021760  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935.pem (1338 bytes)
	W0626 18:46:17.021796  421526 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935_empty.pem, impossibly tiny 0 bytes
	I0626 18:46:17.021808  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 18:46:17.021831  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem (1082 bytes)
	I0626 18:46:17.021852  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem (1123 bytes)
	I0626 18:46:17.021877  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem (1679 bytes)
	I0626 18:46:17.021918  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem (1708 bytes)
	I0626 18:46:17.021942  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935.pem -> /usr/share/ca-certificates/336935.pem
	I0626 18:46:17.021957  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> /usr/share/ca-certificates/3369352.pem
	I0626 18:46:17.021972  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:46:17.022532  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0626 18:46:17.045051  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0626 18:46:17.066041  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0626 18:46:17.087582  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0626 18:46:17.109059  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 18:46:17.130418  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 18:46:17.151887  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 18:46:17.173640  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 18:46:17.195181  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935.pem --> /usr/share/ca-certificates/336935.pem (1338 bytes)
	I0626 18:46:17.216936  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem --> /usr/share/ca-certificates/3369352.pem (1708 bytes)
	I0626 18:46:17.238624  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 18:46:17.260716  421526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0626 18:46:17.276909  421526 ssh_runner.go:195] Run: openssl version
	I0626 18:46:17.281957  421526 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0626 18:46:17.282164  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 18:46:17.291206  421526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:46:17.294507  421526 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 26 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:46:17.294552  421526 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:46:17.294591  421526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:46:17.300627  421526 command_runner.go:130] > b5213941
	I0626 18:46:17.300852  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 18:46:17.309323  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/336935.pem && ln -fs /usr/share/ca-certificates/336935.pem /etc/ssl/certs/336935.pem"
	I0626 18:46:17.317772  421526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/336935.pem
	I0626 18:46:17.320799  421526 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 26 18:32 /usr/share/ca-certificates/336935.pem
	I0626 18:46:17.320843  421526 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 18:32 /usr/share/ca-certificates/336935.pem
	I0626 18:46:17.320898  421526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/336935.pem
	I0626 18:46:17.327137  421526 command_runner.go:130] > 51391683
	I0626 18:46:17.327201  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/336935.pem /etc/ssl/certs/51391683.0"
	I0626 18:46:17.335774  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3369352.pem && ln -fs /usr/share/ca-certificates/3369352.pem /etc/ssl/certs/3369352.pem"
	I0626 18:46:17.344701  421526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3369352.pem
	I0626 18:46:17.347862  421526 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 26 18:32 /usr/share/ca-certificates/3369352.pem
	I0626 18:46:17.347893  421526 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 18:32 /usr/share/ca-certificates/3369352.pem
	I0626 18:46:17.347926  421526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3369352.pem
	I0626 18:46:17.354261  421526 command_runner.go:130] > 3ec20f2e
	I0626 18:46:17.354334  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3369352.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 18:46:17.362924  421526 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 18:46:17.365979  421526 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 18:46:17.366012  421526 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 18:46:17.366060  421526 kubeadm.go:404] StartCluster: {Name:multinode-306845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-306845 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:46:17.366162  421526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0626 18:46:17.366214  421526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0626 18:46:17.401108  421526 cri.go:89] found id: ""
	I0626 18:46:17.401174  421526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0626 18:46:17.408648  421526 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0626 18:46:17.408676  421526 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0626 18:46:17.408686  421526 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0626 18:46:17.409358  421526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0626 18:46:17.417353  421526 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0626 18:46:17.417407  421526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0626 18:46:17.425142  421526 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0626 18:46:17.425166  421526 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0626 18:46:17.425178  421526 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0626 18:46:17.425191  421526 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 18:46:17.425222  421526 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0626 18:46:17.425258  421526 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0626 18:46:17.467316  421526 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0626 18:46:17.467353  421526 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0626 18:46:17.467461  421526 kubeadm.go:322] [preflight] Running pre-flight checks
	I0626 18:46:17.467477  421526 command_runner.go:130] > [preflight] Running pre-flight checks
	I0626 18:46:17.502797  421526 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0626 18:46:17.502828  421526 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0626 18:46:17.502924  421526 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1036-gcp
	I0626 18:46:17.502947  421526 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1036-gcp
	I0626 18:46:17.502991  421526 kubeadm.go:322] OS: Linux
	I0626 18:46:17.503004  421526 command_runner.go:130] > OS: Linux
	I0626 18:46:17.503065  421526 kubeadm.go:322] CGROUPS_CPU: enabled
	I0626 18:46:17.503073  421526 command_runner.go:130] > CGROUPS_CPU: enabled
	I0626 18:46:17.503114  421526 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0626 18:46:17.503129  421526 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0626 18:46:17.503173  421526 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0626 18:46:17.503183  421526 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0626 18:46:17.503220  421526 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0626 18:46:17.503259  421526 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0626 18:46:17.503340  421526 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0626 18:46:17.503353  421526 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0626 18:46:17.503434  421526 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0626 18:46:17.503453  421526 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0626 18:46:17.503490  421526 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0626 18:46:17.503498  421526 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0626 18:46:17.503559  421526 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0626 18:46:17.503573  421526 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0626 18:46:17.503631  421526 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0626 18:46:17.503642  421526 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0626 18:46:17.565593  421526 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 18:46:17.565625  421526 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0626 18:46:17.565759  421526 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 18:46:17.565790  421526 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0626 18:46:17.565935  421526 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 18:46:17.565952  421526 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0626 18:46:17.758126  421526 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 18:46:17.762628  421526 out.go:204]   - Generating certificates and keys ...
	I0626 18:46:17.758263  421526 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0626 18:46:17.762784  421526 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0626 18:46:17.762815  421526 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0626 18:46:17.762923  421526 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0626 18:46:17.762952  421526 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0626 18:46:18.141702  421526 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0626 18:46:18.141737  421526 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0626 18:46:18.280542  421526 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0626 18:46:18.280589  421526 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0626 18:46:18.502189  421526 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0626 18:46:18.502225  421526 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0626 18:46:18.576344  421526 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0626 18:46:18.576373  421526 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0626 18:46:18.745204  421526 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0626 18:46:18.745250  421526 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0626 18:46:18.745389  421526 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-306845] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0626 18:46:18.745414  421526 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-306845] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0626 18:46:18.941299  421526 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0626 18:46:18.941351  421526 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0626 18:46:18.941532  421526 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-306845] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0626 18:46:18.941544  421526 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-306845] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0626 18:46:19.011456  421526 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0626 18:46:19.011490  421526 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0626 18:46:19.221630  421526 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0626 18:46:19.221690  421526 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0626 18:46:19.373890  421526 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0626 18:46:19.373941  421526 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0626 18:46:19.374042  421526 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 18:46:19.374058  421526 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0626 18:46:19.544842  421526 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 18:46:19.544905  421526 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0626 18:46:19.871865  421526 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 18:46:19.871911  421526 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0626 18:46:19.936765  421526 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 18:46:19.936797  421526 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0626 18:46:20.038037  421526 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 18:46:20.038108  421526 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0626 18:46:20.046723  421526 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 18:46:20.046749  421526 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 18:46:20.047990  421526 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 18:46:20.048032  421526 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 18:46:20.048104  421526 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0626 18:46:20.048116  421526 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0626 18:46:20.120781  421526 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 18:46:20.122963  421526 out.go:204]   - Booting up control plane ...
	I0626 18:46:20.120928  421526 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0626 18:46:20.123095  421526 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 18:46:20.123127  421526 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0626 18:46:20.123293  421526 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 18:46:20.123320  421526 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0626 18:46:20.124506  421526 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 18:46:20.124539  421526 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0626 18:46:20.125463  421526 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 18:46:20.125481  421526 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0626 18:46:20.127505  421526 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 18:46:20.127524  421526 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0626 18:46:25.630225  421526 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502648 seconds
	I0626 18:46:25.630268  421526 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.502648 seconds
	I0626 18:46:25.630402  421526 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 18:46:25.630418  421526 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0626 18:46:25.643766  421526 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 18:46:25.643804  421526 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0626 18:46:26.162965  421526 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0626 18:46:26.163006  421526 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0626 18:46:26.163218  421526 kubeadm.go:322] [mark-control-plane] Marking the node multinode-306845 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 18:46:26.163236  421526 command_runner.go:130] > [mark-control-plane] Marking the node multinode-306845 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0626 18:46:26.674251  421526 kubeadm.go:322] [bootstrap-token] Using token: g0qusl.58ztgur0jchlz3up
	I0626 18:46:26.675834  421526 out.go:204]   - Configuring RBAC rules ...
	I0626 18:46:26.674337  421526 command_runner.go:130] > [bootstrap-token] Using token: g0qusl.58ztgur0jchlz3up
	I0626 18:46:26.675977  421526 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 18:46:26.675992  421526 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0626 18:46:26.680334  421526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 18:46:26.680358  421526 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0626 18:46:26.686912  421526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 18:46:26.686932  421526 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0626 18:46:26.690803  421526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 18:46:26.690810  421526 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0626 18:46:26.693487  421526 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 18:46:26.693524  421526 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0626 18:46:26.696736  421526 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 18:46:26.696755  421526 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0626 18:46:26.707530  421526 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 18:46:26.707552  421526 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0626 18:46:26.915287  421526 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0626 18:46:26.915317  421526 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0626 18:46:27.097407  421526 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0626 18:46:27.097477  421526 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0626 18:46:27.098532  421526 kubeadm.go:322] 
	I0626 18:46:27.098619  421526 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0626 18:46:27.098632  421526 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0626 18:46:27.098638  421526 kubeadm.go:322] 
	I0626 18:46:27.098732  421526 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0626 18:46:27.098739  421526 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0626 18:46:27.098744  421526 kubeadm.go:322] 
	I0626 18:46:27.098776  421526 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0626 18:46:27.098782  421526 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0626 18:46:27.098848  421526 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 18:46:27.098854  421526 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0626 18:46:27.098915  421526 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 18:46:27.098921  421526 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0626 18:46:27.098926  421526 kubeadm.go:322] 
	I0626 18:46:27.098986  421526 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0626 18:46:27.098990  421526 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0626 18:46:27.098993  421526 kubeadm.go:322] 
	I0626 18:46:27.099030  421526 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 18:46:27.099034  421526 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0626 18:46:27.099038  421526 kubeadm.go:322] 
	I0626 18:46:27.099092  421526 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0626 18:46:27.099099  421526 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0626 18:46:27.099176  421526 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 18:46:27.099182  421526 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0626 18:46:27.099254  421526 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 18:46:27.099262  421526 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0626 18:46:27.099266  421526 kubeadm.go:322] 
	I0626 18:46:27.099356  421526 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0626 18:46:27.099360  421526 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0626 18:46:27.099423  421526 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0626 18:46:27.099427  421526 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0626 18:46:27.099430  421526 kubeadm.go:322] 
	I0626 18:46:27.099495  421526 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g0qusl.58ztgur0jchlz3up \
	I0626 18:46:27.099498  421526 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token g0qusl.58ztgur0jchlz3up \
	I0626 18:46:27.099578  421526 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac \
	I0626 18:46:27.099581  421526 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac \
	I0626 18:46:27.099596  421526 kubeadm.go:322] 	--control-plane 
	I0626 18:46:27.099600  421526 command_runner.go:130] > 	--control-plane 
	I0626 18:46:27.099602  421526 kubeadm.go:322] 
	I0626 18:46:27.099674  421526 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0626 18:46:27.099678  421526 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0626 18:46:27.099684  421526 kubeadm.go:322] 
	I0626 18:46:27.099747  421526 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g0qusl.58ztgur0jchlz3up \
	I0626 18:46:27.099751  421526 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token g0qusl.58ztgur0jchlz3up \
	I0626 18:46:27.099830  421526 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac 
	I0626 18:46:27.099835  421526 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac 
	I0626 18:46:27.102803  421526 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-gcp\n", err: exit status 1
	I0626 18:46:27.102831  421526 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-gcp\n", err: exit status 1
	I0626 18:46:27.103009  421526 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 18:46:27.103031  421526 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 18:46:27.103054  421526 cni.go:84] Creating CNI manager for ""
	I0626 18:46:27.103101  421526 cni.go:137] 1 nodes found, recommending kindnet
	I0626 18:46:27.104883  421526 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0626 18:46:27.106191  421526 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0626 18:46:27.109978  421526 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0626 18:46:27.110006  421526 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0626 18:46:27.110017  421526 command_runner.go:130] > Device: 37h/55d	Inode: 2348905     Links: 1
	I0626 18:46:27.110028  421526 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 18:46:27.110038  421526 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0626 18:46:27.110050  421526 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0626 18:46:27.110062  421526 command_runner.go:130] > Change: 2023-06-26 18:26:22.579705851 +0000
	I0626 18:46:27.110071  421526 command_runner.go:130] >  Birth: 2023-06-26 18:26:22.555703512 +0000
	I0626 18:46:27.110131  421526 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0626 18:46:27.110144  421526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0626 18:46:27.128345  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0626 18:46:27.785710  421526 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0626 18:46:27.792781  421526 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0626 18:46:27.799832  421526 command_runner.go:130] > serviceaccount/kindnet created
	I0626 18:46:27.809244  421526 command_runner.go:130] > daemonset.apps/kindnet created
	I0626 18:46:27.813101  421526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0626 18:46:27.813174  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:27.813212  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1 minikube.k8s.io/name=multinode-306845 minikube.k8s.io/updated_at=2023_06_26T18_46_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:27.899437  421526 command_runner.go:130] > -16
	I0626 18:46:27.899498  421526 ops.go:34] apiserver oom_adj: -16
	I0626 18:46:27.899520  421526 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0626 18:46:27.899626  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:27.902150  421526 command_runner.go:130] > node/multinode-306845 labeled
	I0626 18:46:27.965912  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:28.466748  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:28.529714  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:28.966257  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:29.027579  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:29.466912  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:29.530999  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:29.966889  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:30.029302  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:30.466379  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:30.527025  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:30.966408  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:31.031815  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:31.466421  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:31.530295  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:31.966955  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:32.030310  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:32.467002  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:32.527814  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:32.966879  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:33.029845  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:33.466429  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:33.528477  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:33.966953  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:34.026958  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:34.466919  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:34.528617  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:34.966829  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:35.034791  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:35.466331  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:35.530714  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:35.967000  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:36.031786  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:36.466320  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:36.529562  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:36.966157  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:37.033844  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:37.466436  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:37.528950  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:37.966661  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:38.032462  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:38.466092  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:38.528578  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:38.966943  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:39.031243  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:39.466786  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:39.531484  421526 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0626 18:46:39.966156  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0626 18:46:40.032061  421526 command_runner.go:130] > NAME      SECRETS   AGE
	I0626 18:46:40.032089  421526 command_runner.go:130] > default   0         1s
	I0626 18:46:40.032120  421526 kubeadm.go:1081] duration metric: took 12.219011567s to wait for elevateKubeSystemPrivileges.
	I0626 18:46:40.032143  421526 kubeadm.go:406] StartCluster complete in 22.666088171s
	I0626 18:46:40.032168  421526 settings.go:142] acquiring lock: {Name:mkb5ecb1b3f16a0c9ac49740714c898cb701a346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:40.032251  421526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:46:40.033146  421526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/kubeconfig: {Name:mk4c2529327c78ca1f9c9f9cbf169818d7b9a7d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:46:40.033412  421526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0626 18:46:40.033558  421526 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0626 18:46:40.033683  421526 addons.go:66] Setting storage-provisioner=true in profile "multinode-306845"
	I0626 18:46:40.033701  421526 addons.go:66] Setting default-storageclass=true in profile "multinode-306845"
	I0626 18:46:40.033764  421526 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:46:40.033781  421526 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-306845"
	I0626 18:46:40.033706  421526 addons.go:228] Setting addon storage-provisioner=true in "multinode-306845"
	I0626 18:46:40.033713  421526 config.go:182] Loaded profile config "multinode-306845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:46:40.033895  421526 host.go:66] Checking if "multinode-306845" exists ...
	I0626 18:46:40.034069  421526 kapi.go:59] client config for multinode-306845: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.key", CAFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 18:46:40.034178  421526 cli_runner.go:164] Run: docker container inspect multinode-306845 --format={{.State.Status}}
	I0626 18:46:40.034407  421526 cli_runner.go:164] Run: docker container inspect multinode-306845 --format={{.State.Status}}
	I0626 18:46:40.034853  421526 cert_rotation.go:137] Starting client certificate rotation controller
	I0626 18:46:40.035055  421526 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 18:46:40.035070  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:40.035078  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:40.035087  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:40.045970  421526 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0626 18:46:40.045999  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:40.046010  421526 round_trippers.go:580]     Audit-Id: 5488a635-5f2b-4c61-9ce8-f3b900d77364
	I0626 18:46:40.046019  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:40.046028  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:40.046036  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:40.046045  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:40.046053  421526 round_trippers.go:580]     Content-Length: 291
	I0626 18:46:40.046067  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:40 GMT
	I0626 18:46:40.046108  421526 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6089fab5-ac67-4428-9fcc-92ab5c9f4130","resourceVersion":"261","creationTimestamp":"2023-06-26T18:46:26Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0626 18:46:40.046645  421526 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6089fab5-ac67-4428-9fcc-92ab5c9f4130","resourceVersion":"261","creationTimestamp":"2023-06-26T18:46:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0626 18:46:40.046708  421526 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 18:46:40.046720  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:40.046730  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:40.046745  421526 round_trippers.go:473]     Content-Type: application/json
	I0626 18:46:40.046761  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:40.053093  421526 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0626 18:46:40.053122  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:40.053133  421526 round_trippers.go:580]     Audit-Id: 629597c6-30f3-422f-a1e6-067cad0c3689
	I0626 18:46:40.053143  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:40.053152  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:40.053169  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:40.053178  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:40.053187  421526 round_trippers.go:580]     Content-Length: 291
	I0626 18:46:40.053199  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:40 GMT
	I0626 18:46:40.053228  421526 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6089fab5-ac67-4428-9fcc-92ab5c9f4130","resourceVersion":"341","creationTimestamp":"2023-06-26T18:46:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0626 18:46:40.054953  421526 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:46:40.056849  421526 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0626 18:46:40.055326  421526 kapi.go:59] client config for multinode-306845: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.key", CAFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 18:46:40.058472  421526 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 18:46:40.058490  421526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0626 18:46:40.058553  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:40.058624  421526 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0626 18:46:40.058632  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:40.058639  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:40.058645  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:40.061015  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:40.061036  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:40.061046  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:40.061055  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:40.061073  421526 round_trippers.go:580]     Content-Length: 109
	I0626 18:46:40.061086  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:40 GMT
	I0626 18:46:40.061099  421526 round_trippers.go:580]     Audit-Id: af5d9f74-90b5-48f8-9ac5-6d54d4a3c9b6
	I0626 18:46:40.061112  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:40.061125  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:40.061168  421526 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"341"},"items":[]}
	I0626 18:46:40.061447  421526 addons.go:228] Setting addon default-storageclass=true in "multinode-306845"
	I0626 18:46:40.061489  421526 host.go:66] Checking if "multinode-306845" exists ...
	I0626 18:46:40.061842  421526 cli_runner.go:164] Run: docker container inspect multinode-306845 --format={{.State.Status}}
	I0626 18:46:40.079360  421526 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0626 18:46:40.079382  421526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0626 18:46:40.079427  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:46:40.080758  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa Username:docker}
	I0626 18:46:40.102847  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa Username:docker}
	I0626 18:46:40.114875  421526 command_runner.go:130] > apiVersion: v1
	I0626 18:46:40.114897  421526 command_runner.go:130] > data:
	I0626 18:46:40.114904  421526 command_runner.go:130] >   Corefile: |
	I0626 18:46:40.114909  421526 command_runner.go:130] >     .:53 {
	I0626 18:46:40.114915  421526 command_runner.go:130] >         errors
	I0626 18:46:40.114923  421526 command_runner.go:130] >         health {
	I0626 18:46:40.114930  421526 command_runner.go:130] >            lameduck 5s
	I0626 18:46:40.114939  421526 command_runner.go:130] >         }
	I0626 18:46:40.114945  421526 command_runner.go:130] >         ready
	I0626 18:46:40.114958  421526 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0626 18:46:40.114968  421526 command_runner.go:130] >            pods insecure
	I0626 18:46:40.114979  421526 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0626 18:46:40.114989  421526 command_runner.go:130] >            ttl 30
	I0626 18:46:40.114998  421526 command_runner.go:130] >         }
	I0626 18:46:40.115012  421526 command_runner.go:130] >         prometheus :9153
	I0626 18:46:40.115022  421526 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0626 18:46:40.115032  421526 command_runner.go:130] >            max_concurrent 1000
	I0626 18:46:40.115055  421526 command_runner.go:130] >         }
	I0626 18:46:40.115064  421526 command_runner.go:130] >         cache 30
	I0626 18:46:40.115073  421526 command_runner.go:130] >         loop
	I0626 18:46:40.115083  421526 command_runner.go:130] >         reload
	I0626 18:46:40.115092  421526 command_runner.go:130] >         loadbalance
	I0626 18:46:40.115100  421526 command_runner.go:130] >     }
	I0626 18:46:40.115110  421526 command_runner.go:130] > kind: ConfigMap
	I0626 18:46:40.115118  421526 command_runner.go:130] > metadata:
	I0626 18:46:40.115128  421526 command_runner.go:130] >   creationTimestamp: "2023-06-26T18:46:26Z"
	I0626 18:46:40.115137  421526 command_runner.go:130] >   name: coredns
	I0626 18:46:40.115147  421526 command_runner.go:130] >   namespace: kube-system
	I0626 18:46:40.115156  421526 command_runner.go:130] >   resourceVersion: "257"
	I0626 18:46:40.115167  421526 command_runner.go:130] >   uid: 3ffcf27a-f210-4e69-b12f-eebe03d2fafb
	I0626 18:46:40.117450  421526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0626 18:46:40.209893  421526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0626 18:46:40.210175  421526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0626 18:46:40.553605  421526 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 18:46:40.553638  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:40.553646  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:40.553653  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:40.606451  421526 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0626 18:46:40.606487  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:40.606498  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:40.606507  421526 round_trippers.go:580]     Content-Length: 291
	I0626 18:46:40.606515  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:40 GMT
	I0626 18:46:40.606537  421526 round_trippers.go:580]     Audit-Id: b9ca3b0e-aee6-45c4-b9d9-31cc8035f8ad
	I0626 18:46:40.606551  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:40.606568  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:40.606577  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:40.606614  421526 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6089fab5-ac67-4428-9fcc-92ab5c9f4130","resourceVersion":"341","creationTimestamp":"2023-06-26T18:46:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0626 18:46:40.606759  421526 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-306845" context rescaled to 1 replicas
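The rescale above is done through the Deployment's scale subresource; a roughly equivalent command-line form (a sketch, assuming the multinode-306845 context from this run is present in the local kubeconfig) would be:

    kubectl --context multinode-306845 -n kube-system scale deployment coredns --replicas=1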
	I0626 18:46:40.606803  421526 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0626 18:46:40.608792  421526 out.go:177] * Verifying Kubernetes components...
	I0626 18:46:40.610198  421526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
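The kubelet check above relies only on the exit status: with --quiet, systemctl is-active prints nothing and exits 0 when the unit is active. A hand-run sketch of the same check on the node:

    sudo systemctl is-active --quiet kubelet && echo "kubelet is active"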
	I0626 18:46:40.896942  421526 command_runner.go:130] > configmap/coredns replaced
	I0626 18:46:40.899783  421526 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
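The sed pipeline a few lines up inserts a hosts block (mapping host.minikube.internal to the host gateway, 192.168.58.1 here) before the forward directive, plus a log directive before errors, and replaces the coredns ConfigMap. One way to confirm the change landed, as a sketch under the same assumption that the multinode-306845 context is available locally:

    kubectl --context multinode-306845 -n kube-system get configmap coredns -o yaml | grep -A 4 'hosts {'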
	I0626 18:46:41.226961  421526 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0626 18:46:41.233438  421526 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0626 18:46:41.242830  421526 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0626 18:46:41.250157  421526 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0626 18:46:41.298276  421526 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0626 18:46:41.309528  421526 command_runner.go:130] > pod/storage-provisioner created
	I0626 18:46:41.314795  421526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.104851303s)
	I0626 18:46:41.314858  421526 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0626 18:46:41.314900  421526 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.104700041s)
	I0626 18:46:41.316403  421526 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0626 18:46:41.315406  421526 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:46:41.317742  421526 addons.go:499] enable addons completed in 1.284179873s: enabled=[storage-provisioner default-storageclass]
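With both addons reported as enabled, the objects created above can be inspected directly; a sketch, using the same context assumption (the storage-provisioner pod normally lands in kube-system for this addon):

    kubectl --context multinode-306845 get storageclass standard
    kubectl --context multinode-306845 -n kube-system get pod storage-provisioner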
	I0626 18:46:41.318078  421526 kapi.go:59] client config for multinode-306845: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.key", CAFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 18:46:41.318448  421526 node_ready.go:35] waiting up to 6m0s for node "multinode-306845" to be "Ready" ...
	I0626 18:46:41.318559  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:41.318570  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:41.318583  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:41.318603  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:41.320952  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:41.320974  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:41.320985  421526 round_trippers.go:580]     Audit-Id: 94a8b0b2-56f0-4f3c-968c-89503d7ad40c
	I0626 18:46:41.320994  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:41.321002  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:41.321043  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:41.321057  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:41.321071  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:41 GMT
	I0626 18:46:41.321210  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"355","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0626 18:46:41.822703  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:41.822735  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:41.822746  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:41.822753  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:41.825231  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:41.825254  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:41.825262  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:41.825268  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:41.825274  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:41 GMT
	I0626 18:46:41.825279  421526 round_trippers.go:580]     Audit-Id: 71af57ae-d559-40ef-a80a-4f684e3d3446
	I0626 18:46:41.825285  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:41.825290  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:41.825390  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"355","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0626 18:46:42.323001  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:42.323021  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:42.323030  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:42.323036  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:42.325457  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:42.325475  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:42.325482  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:42.325488  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:42.325494  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:42 GMT
	I0626 18:46:42.325501  421526 round_trippers.go:580]     Audit-Id: a8a7b7c5-4917-497f-af26-6de0e81dedec
	I0626 18:46:42.325507  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:42.325512  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:42.325597  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:42.325937  421526 node_ready.go:49] node "multinode-306845" has status "Ready":"True"
	I0626 18:46:42.325953  421526 node_ready.go:38] duration metric: took 1.007485416s waiting for node "multinode-306845" to be "Ready" ...
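The node wait above is just repeated GETs of the node object until its conditions show "Ready":"True". The same condition can be checked from the host with kubectl; a sketch, assuming the multinode-306845 context locally:

    kubectl --context multinode-306845 wait --for=condition=Ready node/multinode-306845 --timeout=6m
    kubectl --context multinode-306845 get node multinode-306845 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'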
	I0626 18:46:42.325962  421526 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 18:46:42.326024  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0626 18:46:42.326033  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:42.326040  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:42.326046  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:42.328910  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:42.328928  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:42.328938  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:42.328948  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:42.328956  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:42.328969  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:42.328979  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:42 GMT
	I0626 18:46:42.328991  421526 round_trippers.go:580]     Audit-Id: 0e1e37a4-b9d1-4842-b45f-537730fb7acf
	I0626 18:46:42.329485  421526 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"419","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54151 chars]
	I0626 18:46:42.332384  421526 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:42.332447  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:42.332456  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:42.332468  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:42.332477  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:42.334403  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:42.334419  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:42.334426  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:42.334432  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:42.334437  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:42.334446  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:42.334455  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:42 GMT
	I0626 18:46:42.334474  421526 round_trippers.go:580]     Audit-Id: f704274f-e066-4054-839f-ae30a04461b8
	I0626 18:46:42.334585  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"419","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0626 18:46:42.334970  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:42.334982  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:42.334989  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:42.334996  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:42.336735  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:42.336750  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:42.336757  421526 round_trippers.go:580]     Audit-Id: e4df6d48-7313-4cfd-8ff9-921f1500b1e1
	I0626 18:46:42.336763  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:42.336768  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:42.336776  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:42.336784  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:42.336793  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:42 GMT
	I0626 18:46:42.336932  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:42.838020  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:42.838040  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:42.838049  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:42.838055  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:42.840514  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:42.840537  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:42.840549  421526 round_trippers.go:580]     Audit-Id: 6bc4d69f-45e5-49f5-8027-ba14d64647e1
	I0626 18:46:42.840558  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:42.840568  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:42.840577  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:42.840585  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:42.840592  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:42 GMT
	I0626 18:46:42.840712  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"419","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0626 18:46:42.841329  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:42.841345  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:42.841356  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:42.841364  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:42.843461  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:42.843479  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:42.843488  421526 round_trippers.go:580]     Audit-Id: 9cfb64a9-4c7a-4169-bf39-a625dbea5d11
	I0626 18:46:42.843494  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:42.843499  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:42.843504  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:42.843510  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:42.843516  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:42 GMT
	I0626 18:46:42.843705  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:43.338142  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:43.338166  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:43.338177  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:43.338190  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:43.340632  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:43.340653  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:43.340660  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:43.340666  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:43 GMT
	I0626 18:46:43.340674  421526 round_trippers.go:580]     Audit-Id: 00b3f8c6-123b-48b3-99b2-4434c4e250a3
	I0626 18:46:43.340682  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:43.340694  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:43.340703  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:43.340829  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:43.341434  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:43.341453  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:43.341462  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:43.341470  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:43.343490  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:43.343512  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:43.343524  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:43.343532  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:43.343540  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:43.343549  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:43 GMT
	I0626 18:46:43.343563  421526 round_trippers.go:580]     Audit-Id: a950c4f0-ac9b-46da-9669-fcf6c2b2a328
	I0626 18:46:43.343574  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:43.343698  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:43.838120  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:43.838142  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:43.838150  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:43.838156  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:43.840650  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:43.840675  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:43.840686  421526 round_trippers.go:580]     Audit-Id: 9fa9c90a-e4b0-44f4-93bf-87267a60c8e2
	I0626 18:46:43.840695  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:43.840703  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:43.840712  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:43.840726  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:43.840733  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:43 GMT
	I0626 18:46:43.840892  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:43.841355  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:43.841366  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:43.841374  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:43.841380  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:43.843387  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:43.843411  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:43.843423  421526 round_trippers.go:580]     Audit-Id: ad9f73b9-4a52-42c3-89aa-57eafa8ceafe
	I0626 18:46:43.843440  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:43.843454  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:43.843473  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:43.843483  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:43.843497  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:43 GMT
	I0626 18:46:43.843589  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:44.338218  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:44.338238  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:44.338247  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:44.338253  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:44.340827  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:44.340848  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:44.340855  421526 round_trippers.go:580]     Audit-Id: 32e84159-c9f8-4fb3-88be-db047da161c1
	I0626 18:46:44.340878  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:44.340887  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:44.340896  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:44.340904  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:44.340911  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:44 GMT
	I0626 18:46:44.341108  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:44.341678  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:44.341694  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:44.341701  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:44.341708  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:44.343725  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:44.343747  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:44.343758  421526 round_trippers.go:580]     Audit-Id: 88e66f59-7f66-4c3b-b4e3-c9211f7d71ee
	I0626 18:46:44.343766  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:44.343774  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:44.343781  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:44.343794  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:44.343803  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:44 GMT
	I0626 18:46:44.343926  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:44.344261  421526 pod_ready.go:102] pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace has status "Ready":"False"
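The pod_ready loop keeps polling this pod until its Ready condition turns True. A rough manual equivalent of the same check, under the same local-context assumption, is:

    kubectl --context multinode-306845 -n kube-system wait \
      --for=condition=Ready pod/coredns-5d78c9869d-d67vq --timeout=6m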
	I0626 18:46:44.838050  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:44.838081  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:44.838092  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:44.838102  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:44.841108  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:44.841134  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:44.841142  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:44.841148  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:44.841153  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:44.841161  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:44.841171  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:44 GMT
	I0626 18:46:44.841180  421526 round_trippers.go:580]     Audit-Id: 5fdec76f-5af8-4143-bb36-e5f4794cf48d
	I0626 18:46:44.841455  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:44.842136  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:44.842159  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:44.842171  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:44.842181  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:44.844161  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:44.844182  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:44.844191  421526 round_trippers.go:580]     Audit-Id: 7d06a46d-ede0-4d80-9ac7-3f736db4009e
	I0626 18:46:44.844201  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:44.844209  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:44.844217  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:44.844225  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:44.844235  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:44 GMT
	I0626 18:46:44.844389  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:45.338141  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:45.338164  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:45.338173  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:45.338179  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:45.340579  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:45.340599  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:45.340606  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:45.340614  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:45.340623  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:45.340633  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:45 GMT
	I0626 18:46:45.340641  421526 round_trippers.go:580]     Audit-Id: 4f532a27-7c6c-4773-a877-a10c4f746ef0
	I0626 18:46:45.340653  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:45.340805  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:45.341292  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:45.341308  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:45.341316  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:45.341322  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:45.343254  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:45.343274  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:45.343281  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:45.343289  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:45 GMT
	I0626 18:46:45.343297  421526 round_trippers.go:580]     Audit-Id: 70221689-d727-4992-a054-ac2655258e74
	I0626 18:46:45.343305  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:45.343312  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:45.343321  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:45.343422  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:45.838115  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:45.838137  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:45.838146  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:45.838153  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:45.840694  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:45.840719  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:45.840730  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:45.840740  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:45 GMT
	I0626 18:46:45.840748  421526 round_trippers.go:580]     Audit-Id: 6ecc5992-bd63-4a61-aabc-86fa84600326
	I0626 18:46:45.840776  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:45.840784  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:45.840797  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:45.840951  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:45.841401  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:45.841415  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:45.841422  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:45.841428  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:45.843385  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:45.843406  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:45.843415  421526 round_trippers.go:580]     Audit-Id: 40210b4c-f616-4156-bc35-f9e362e5d169
	I0626 18:46:45.843423  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:45.843431  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:45.843440  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:45.843448  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:45.843459  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:45 GMT
	I0626 18:46:45.843620  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:46.338140  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:46.338168  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:46.338180  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:46.338188  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:46.340730  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:46.340750  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:46.340758  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:46 GMT
	I0626 18:46:46.340764  421526 round_trippers.go:580]     Audit-Id: 90d3f9d7-ecc4-4abb-bdb1-1317b6846531
	I0626 18:46:46.340769  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:46.340774  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:46.340779  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:46.340784  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:46.341017  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:46.341473  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:46.341485  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:46.341493  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:46.341508  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:46.343508  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:46.343531  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:46.343542  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:46 GMT
	I0626 18:46:46.343550  421526 round_trippers.go:580]     Audit-Id: c747b39c-1d82-4ce0-a199-9c46d87bc387
	I0626 18:46:46.343558  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:46.343566  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:46.343579  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:46.343591  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:46.343691  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:46.838129  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:46.838151  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:46.838160  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:46.838166  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:46.842246  421526 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0626 18:46:46.842267  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:46.842274  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:46 GMT
	I0626 18:46:46.842280  421526 round_trippers.go:580]     Audit-Id: 17d4c5ea-5e12-4824-8001-215eabcdfa5d
	I0626 18:46:46.842288  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:46.842293  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:46.842299  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:46.842304  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:46.842474  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:46.843066  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:46.843083  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:46.843095  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:46.843104  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:46.844953  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:46.844973  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:46.844983  421526 round_trippers.go:580]     Audit-Id: 69ae1a66-452b-422d-adbd-465cb1dd4c97
	I0626 18:46:46.844992  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:46.845001  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:46.845029  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:46.845039  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:46.845045  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:46 GMT
	I0626 18:46:46.845164  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:46.845546  421526 pod_ready.go:102] pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace has status "Ready":"False"
	I0626 18:46:47.337817  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:47.337838  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:47.337847  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:47.337854  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:47.340149  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:47.340177  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:47.340188  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:47.340198  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:47.340211  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:47.340220  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:47 GMT
	I0626 18:46:47.340233  421526 round_trippers.go:580]     Audit-Id: 8d2e646c-1efe-4b87-b605-ac597764b811
	I0626 18:46:47.340242  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:47.340383  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:47.341004  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:47.341021  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:47.341030  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:47.341038  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:47.343064  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:47.343080  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:47.343087  421526 round_trippers.go:580]     Audit-Id: 4d275e57-4df3-4174-95b8-b23f6f249a15
	I0626 18:46:47.343092  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:47.343098  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:47.343103  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:47.343111  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:47.343120  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:47 GMT
	I0626 18:46:47.343234  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:47.837885  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:47.837906  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:47.837915  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:47.837921  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:47.840399  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:47.840419  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:47.840426  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:47.840432  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:47.840437  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:47 GMT
	I0626 18:46:47.840443  421526 round_trippers.go:580]     Audit-Id: 6bb1cc2d-ed50-4af2-88cc-2f41dc0d7d85
	I0626 18:46:47.840448  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:47.840453  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:47.840600  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:47.841108  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:47.841123  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:47.841133  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:47.841142  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:47.843088  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:47.843106  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:47.843112  421526 round_trippers.go:580]     Audit-Id: f9524504-fa5b-41ac-858d-d456c4464af4
	I0626 18:46:47.843119  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:47.843127  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:47.843135  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:47.843146  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:47.843158  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:47 GMT
	I0626 18:46:47.843233  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:48.337849  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:48.337873  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:48.337881  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:48.337887  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:48.340315  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:48.340342  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:48.340355  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:48.340365  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:48.340378  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:48.340385  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:48 GMT
	I0626 18:46:48.340394  421526 round_trippers.go:580]     Audit-Id: 1a3eb907-54a2-48fd-ba83-57c1286751a4
	I0626 18:46:48.340399  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:48.340505  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:48.340993  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:48.341007  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:48.341014  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:48.341020  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:48.342997  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:48.343015  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:48.343022  421526 round_trippers.go:580]     Audit-Id: c6da2b39-5f39-4699-85fc-39c97687af2e
	I0626 18:46:48.343028  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:48.343033  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:48.343039  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:48.343044  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:48.343049  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:48 GMT
	I0626 18:46:48.343137  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:48.837700  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:48.837724  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:48.837735  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:48.837743  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:48.840335  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:48.840359  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:48.840367  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:48.840373  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:48.840378  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:48 GMT
	I0626 18:46:48.840383  421526 round_trippers.go:580]     Audit-Id: 23d09e03-2fc3-41a6-bb64-abfd03c5dc11
	I0626 18:46:48.840389  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:48.840394  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:48.840510  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:48.841005  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:48.841020  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:48.841027  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:48.841034  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:48.843034  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:48.843071  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:48.843081  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:48.843090  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:48 GMT
	I0626 18:46:48.843099  421526 round_trippers.go:580]     Audit-Id: 5f09cee6-c595-43c0-b119-c023482c1860
	I0626 18:46:48.843110  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:48.843122  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:48.843130  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:48.843202  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:49.337541  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:49.337563  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:49.337572  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:49.337584  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:49.340105  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:49.340132  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:49.340143  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:49.340152  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:49 GMT
	I0626 18:46:49.340160  421526 round_trippers.go:580]     Audit-Id: b9a8c5e6-d3a3-4fea-9031-f8b1a6075e06
	I0626 18:46:49.340168  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:49.340177  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:49.340186  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:49.340321  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:49.340834  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:49.340847  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:49.340855  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:49.340885  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:49.342967  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:49.342989  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:49.342999  421526 round_trippers.go:580]     Audit-Id: f093941d-be02-4ed0-bbb0-99d0c58cce93
	I0626 18:46:49.343008  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:49.343017  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:49.343027  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:49.343036  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:49.343050  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:49 GMT
	I0626 18:46:49.343192  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:49.343608  421526 pod_ready.go:102] pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace has status "Ready":"False"
	I0626 18:46:49.837988  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:49.838017  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:49.838029  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:49.838039  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:49.840837  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:49.840879  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:49.840890  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:49.840900  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:49.840909  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:49.840920  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:49.840932  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:49 GMT
	I0626 18:46:49.840944  421526 round_trippers.go:580]     Audit-Id: 756de563-0aca-47b7-be93-ca646c7f8bf0
	I0626 18:46:49.841052  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:49.841570  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:49.841583  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:49.841590  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:49.841596  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:49.843691  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:49.843713  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:49.843723  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:49.843731  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:49.843742  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:49.843752  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:49 GMT
	I0626 18:46:49.843770  421526 round_trippers.go:580]     Audit-Id: a9da385b-a5d5-4b1d-bd99-72584688776a
	I0626 18:46:49.843782  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:49.843889  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:50.337722  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:50.337747  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:50.337760  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:50.337770  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:50.340321  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:50.340342  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:50.340349  421526 round_trippers.go:580]     Audit-Id: e526b49f-1998-4b7e-ae2f-e0bf7f8caf44
	I0626 18:46:50.340355  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:50.340360  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:50.340366  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:50.340371  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:50.340378  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:50 GMT
	I0626 18:46:50.340483  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:50.340974  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:50.340988  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:50.340998  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:50.341006  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:50.342903  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:50.342923  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:50.342932  421526 round_trippers.go:580]     Audit-Id: f3e5dfe9-e7dc-4318-b724-f0e0bf75208d
	I0626 18:46:50.342949  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:50.342959  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:50.342972  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:50.342985  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:50.342998  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:50 GMT
	I0626 18:46:50.343096  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:50.837584  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:50.837607  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:50.837615  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:50.837627  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:50.840277  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:50.840307  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:50.840315  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:50 GMT
	I0626 18:46:50.840321  421526 round_trippers.go:580]     Audit-Id: 4d364ea6-a181-4e83-a728-df39d905ae08
	I0626 18:46:50.840326  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:50.840332  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:50.840337  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:50.840343  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:50.840457  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:50.840929  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:50.840943  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:50.840950  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:50.840956  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:50.842872  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:50.842894  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:50.842904  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:50.842914  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:50 GMT
	I0626 18:46:50.842923  421526 round_trippers.go:580]     Audit-Id: d780f31e-a3c8-4ec8-a0d7-140b2fcb9ffa
	I0626 18:46:50.842933  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:50.842952  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:50.842960  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:50.843051  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:51.337666  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:51.337691  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:51.337699  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:51.337705  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:51.340274  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:51.340299  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:51.340308  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:51 GMT
	I0626 18:46:51.340316  421526 round_trippers.go:580]     Audit-Id: 00187f06-68de-4285-acb6-5150d7b6fb0f
	I0626 18:46:51.340326  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:51.340336  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:51.340349  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:51.340361  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:51.340497  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:51.341009  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:51.341024  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:51.341032  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:51.341038  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:51.343025  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:51.343048  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:51.343061  421526 round_trippers.go:580]     Audit-Id: cf7e1e0e-558d-4162-9699-7029a6d22a4a
	I0626 18:46:51.343072  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:51.343080  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:51.343089  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:51.343099  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:51.343112  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:51 GMT
	I0626 18:46:51.343228  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:51.343634  421526 pod_ready.go:102] pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace has status "Ready":"False"
	I0626 18:46:51.837864  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:51.837891  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:51.837902  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:51.837909  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:51.840726  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:51.840751  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:51.840763  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:51.840773  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:51.840782  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:51.840790  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:51 GMT
	I0626 18:46:51.840798  421526 round_trippers.go:580]     Audit-Id: 70e0d6e3-e543-4cd2-afa3-b2be4152646b
	I0626 18:46:51.840810  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:51.840948  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:51.841524  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:51.841541  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:51.841552  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:51.841562  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:51.843414  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:51.843437  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:51.843449  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:51.843459  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:51 GMT
	I0626 18:46:51.843466  421526 round_trippers.go:580]     Audit-Id: a6a0ffc7-e303-46b8-a74f-4bd6c972fa98
	I0626 18:46:51.843473  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:51.843479  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:51.843487  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:51.843683  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:52.338099  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:52.338120  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.338129  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.338135  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.340526  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:52.340547  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.340554  421526 round_trippers.go:580]     Audit-Id: 6e5097b4-b6d2-485a-8529-4c923757a221
	I0626 18:46:52.340570  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.340579  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.340587  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.340595  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.340604  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.340711  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"426","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0626 18:46:52.341181  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:52.341193  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.341203  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.341209  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.343292  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:52.343314  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.343325  421526 round_trippers.go:580]     Audit-Id: 1c735c3e-3c65-4f7e-aa3f-0c5a968186f1
	I0626 18:46:52.343335  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.343341  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.343349  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.343357  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.343366  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.343454  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:52.837546  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:46:52.837572  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.837585  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.837595  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.840624  421526 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 18:46:52.840651  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.840659  421526 round_trippers.go:580]     Audit-Id: 468a66e9-a9ec-4839-9ba0-88d97d751e27
	I0626 18:46:52.840666  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.840674  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.840682  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.840693  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.840706  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.840883  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"442","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0626 18:46:52.841379  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:52.841396  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.841407  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.841419  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.843581  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:52.843604  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.843613  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.843623  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.843633  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.843645  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.843660  421526 round_trippers.go:580]     Audit-Id: 357850b0-8ec3-40f2-a099-53ee2a1ddce3
	I0626 18:46:52.843674  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.843792  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:52.844113  421526 pod_ready.go:92] pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace has status "Ready":"True"
	I0626 18:46:52.844131  421526 pod_ready.go:81] duration metric: took 10.511725478s waiting for pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:52.844143  421526 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:52.844222  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-306845
	I0626 18:46:52.844230  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.844237  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.844243  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.845957  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:52.845974  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.845982  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.845988  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.845993  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.846000  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.846008  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.846014  421526 round_trippers.go:580]     Audit-Id: 5c380aaf-40c6-455c-949c-40cd1c5dd161
	I0626 18:46:52.846118  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-306845","namespace":"kube-system","uid":"8e600dee-f767-4680-b831-a3ff0dba8338","resourceVersion":"296","creationTimestamp":"2023-06-26T18:46:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"08749df88deb7f3823978e88f0a29b74","kubernetes.io/config.mirror":"08749df88deb7f3823978e88f0a29b74","kubernetes.io/config.seen":"2023-06-26T18:46:26.996335605Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0626 18:46:52.846460  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:52.846474  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.846482  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.846488  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.848403  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:52.848422  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.848431  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.848439  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.848447  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.848455  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.848470  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.848483  421526 round_trippers.go:580]     Audit-Id: 2e6b4272-88d2-4166-9fdc-456c6d0df5d8
	I0626 18:46:52.848583  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:52.848886  421526 pod_ready.go:92] pod "etcd-multinode-306845" in "kube-system" namespace has status "Ready":"True"
	I0626 18:46:52.848905  421526 pod_ready.go:81] duration metric: took 4.749419ms waiting for pod "etcd-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:52.848916  421526 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:52.848960  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-306845
	I0626 18:46:52.848968  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.848974  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.848981  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.850727  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:52.850747  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.850757  421526 round_trippers.go:580]     Audit-Id: 15f53333-d702-4eda-9355-180ea3b5c9c4
	I0626 18:46:52.850766  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.850775  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.850785  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.850791  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.850799  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.850928  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-306845","namespace":"kube-system","uid":"0cbdbfe4-b817-467c-ab7e-361e8faf4005","resourceVersion":"330","creationTimestamp":"2023-06-26T18:46:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"c36517ea0d43b6839ecd7e61be583393","kubernetes.io/config.mirror":"c36517ea0d43b6839ecd7e61be583393","kubernetes.io/config.seen":"2023-06-26T18:46:26.996342653Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0626 18:46:52.851421  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:52.851434  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.851445  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.851455  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.853300  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:52.853320  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.853331  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.853340  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.853353  421526 round_trippers.go:580]     Audit-Id: 49bf4be2-8761-404c-8dc2-65bd14952082
	I0626 18:46:52.853365  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.853373  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.853383  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.853501  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:52.853778  421526 pod_ready.go:92] pod "kube-apiserver-multinode-306845" in "kube-system" namespace has status "Ready":"True"
	I0626 18:46:52.853792  421526 pod_ready.go:81] duration metric: took 4.867945ms waiting for pod "kube-apiserver-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:52.853800  421526 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:52.853838  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-306845
	I0626 18:46:52.853846  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.853853  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.853859  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.855750  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:52.855774  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.855783  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.855789  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.855794  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.855800  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.855805  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.855812  421526 round_trippers.go:580]     Audit-Id: 28746b6e-7211-475b-9f27-d4c836caaf04
	I0626 18:46:52.855908  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-306845","namespace":"kube-system","uid":"39ac4739-5588-4f57-8ea6-2769d8db08a9","resourceVersion":"322","creationTimestamp":"2023-06-26T18:46:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d1d6a742238884753c6b9e158a03a88d","kubernetes.io/config.mirror":"d1d6a742238884753c6b9e158a03a88d","kubernetes.io/config.seen":"2023-06-26T18:46:26.996344729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0626 18:46:52.856271  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:52.856282  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.856290  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.856296  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.857791  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:52.857809  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.857819  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.857828  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.857838  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.857851  421526 round_trippers.go:580]     Audit-Id: f8803e30-a44b-4093-b077-0579850a82fa
	I0626 18:46:52.857860  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.857872  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.857983  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:52.858236  421526 pod_ready.go:92] pod "kube-controller-manager-multinode-306845" in "kube-system" namespace has status "Ready":"True"
	I0626 18:46:52.858248  421526 pod_ready.go:81] duration metric: took 4.442008ms waiting for pod "kube-controller-manager-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:52.858256  421526 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sk9fw" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:52.858294  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk9fw
	I0626 18:46:52.858301  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.858308  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.858314  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.860027  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:52.860043  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.860050  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.860056  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.860062  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.860067  421526 round_trippers.go:580]     Audit-Id: b699ca48-51d2-4bed-8c71-160cef827a69
	I0626 18:46:52.860072  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.860079  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.860166  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sk9fw","generateName":"kube-proxy-","namespace":"kube-system","uid":"b17ff684-f726-4e3f-9e2e-2270a77f0712","resourceVersion":"410","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"923e24ed-c31b-4710-b3c3-f3667483f706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"923e24ed-c31b-4710-b3c3-f3667483f706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0626 18:46:52.860478  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:52.860488  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:52.860495  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:52.860503  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:52.862141  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:46:52.862170  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:52.862181  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:52.862191  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:52.862200  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:52.862212  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:52 GMT
	I0626 18:46:52.862221  421526 round_trippers.go:580]     Audit-Id: eacc6c76-9d02-4648-abbc-459879aa6d94
	I0626 18:46:52.862229  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:52.862306  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:52.862583  421526 pod_ready.go:92] pod "kube-proxy-sk9fw" in "kube-system" namespace has status "Ready":"True"
	I0626 18:46:52.862596  421526 pod_ready.go:81] duration metric: took 4.334881ms waiting for pod "kube-proxy-sk9fw" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:52.862603  421526 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:53.038025  421526 request.go:628] Waited for 175.358987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-306845
	I0626 18:46:53.038105  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-306845
	I0626 18:46:53.038112  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:53.038123  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:53.038132  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:53.040479  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:53.040497  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:53.040504  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:53.040510  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:53.040515  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:53.040521  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:53 GMT
	I0626 18:46:53.040526  421526 round_trippers.go:580]     Audit-Id: 05e06e59-bdc8-4c2a-92ed-383d4e275da5
	I0626 18:46:53.040531  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:53.040656  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-306845","namespace":"kube-system","uid":"6fdedf36-be16-44b5-b0bf-acb8ed9ada95","resourceVersion":"293","creationTimestamp":"2023-06-26T18:46:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b5afba694f43941ae62fa00a5d7320f4","kubernetes.io/config.mirror":"b5afba694f43941ae62fa00a5d7320f4","kubernetes.io/config.seen":"2023-06-26T18:46:20.468830188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0626 18:46:53.238425  421526 request.go:628] Waited for 197.349227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:53.238489  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:46:53.238495  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:53.238506  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:53.238520  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:53.240747  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:53.240770  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:53.240780  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:53.240788  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:53.240794  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:53.240802  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:53 GMT
	I0626 18:46:53.240810  421526 round_trippers.go:580]     Audit-Id: 53c42147-9a68-498d-9565-3e72efcb8591
	I0626 18:46:53.240816  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:53.240934  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:46:53.241262  421526 pod_ready.go:92] pod "kube-scheduler-multinode-306845" in "kube-system" namespace has status "Ready":"True"
	I0626 18:46:53.241279  421526 pod_ready.go:81] duration metric: took 378.67028ms waiting for pod "kube-scheduler-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:46:53.241290  421526 pod_ready.go:38] duration metric: took 10.915312277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 18:46:53.241310  421526 api_server.go:52] waiting for apiserver process to appear ...
	I0626 18:46:53.241373  421526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 18:46:53.250961  421526 command_runner.go:130] > 1409
	I0626 18:46:53.251713  421526 api_server.go:72] duration metric: took 12.644869374s to wait for apiserver process to appear ...
	I0626 18:46:53.251732  421526 api_server.go:88] waiting for apiserver healthz status ...
	I0626 18:46:53.251746  421526 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0626 18:46:53.256609  421526 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0626 18:46:53.256680  421526 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0626 18:46:53.256688  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:53.256696  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:53.256705  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:53.257614  421526 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0626 18:46:53.257628  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:53.257635  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:53.257642  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:53.257650  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:53.257658  421526 round_trippers.go:580]     Content-Length: 263
	I0626 18:46:53.257663  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:53 GMT
	I0626 18:46:53.257671  421526 round_trippers.go:580]     Audit-Id: 0e4a66ea-f41c-4d45-a1c5-99153fe22cf5
	I0626 18:46:53.257677  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:53.257695  421526 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0626 18:46:53.257772  421526 api_server.go:141] control plane version: v1.27.3
	I0626 18:46:53.257786  421526 api_server.go:131] duration metric: took 6.049559ms to wait for apiserver health ...
	I0626 18:46:53.257793  421526 system_pods.go:43] waiting for kube-system pods to appear ...
	I0626 18:46:53.438094  421526 request.go:628] Waited for 180.230281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0626 18:46:53.438163  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0626 18:46:53.438174  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:53.438182  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:53.438189  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:53.441461  421526 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 18:46:53.441486  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:53.441498  421526 round_trippers.go:580]     Audit-Id: de1d29c4-a583-4ca6-8142-c245a5e49b51
	I0626 18:46:53.441506  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:53.441514  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:53.441522  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:53.441538  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:53.441549  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:53 GMT
	I0626 18:46:53.442574  421526 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"442","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0626 18:46:53.445172  421526 system_pods.go:59] 8 kube-system pods found
	I0626 18:46:53.445194  421526 system_pods.go:61] "coredns-5d78c9869d-d67vq" [631de45d-1e2e-45b8-bdb0-1220d4b68aef] Running
	I0626 18:46:53.445201  421526 system_pods.go:61] "etcd-multinode-306845" [8e600dee-f767-4680-b831-a3ff0dba8338] Running
	I0626 18:46:53.445208  421526 system_pods.go:61] "kindnet-grd84" [df4468e3-83cd-4131-8236-f57f2ceab981] Running
	I0626 18:46:53.445215  421526 system_pods.go:61] "kube-apiserver-multinode-306845" [0cbdbfe4-b817-467c-ab7e-361e8faf4005] Running
	I0626 18:46:53.445227  421526 system_pods.go:61] "kube-controller-manager-multinode-306845" [39ac4739-5588-4f57-8ea6-2769d8db08a9] Running
	I0626 18:46:53.445237  421526 system_pods.go:61] "kube-proxy-sk9fw" [b17ff684-f726-4e3f-9e2e-2270a77f0712] Running
	I0626 18:46:53.445247  421526 system_pods.go:61] "kube-scheduler-multinode-306845" [6fdedf36-be16-44b5-b0bf-acb8ed9ada95] Running
	I0626 18:46:53.445253  421526 system_pods.go:61] "storage-provisioner" [61d6f823-d74e-4855-a446-94018e9ddcd8] Running
	I0626 18:46:53.445258  421526 system_pods.go:74] duration metric: took 187.460875ms to wait for pod list to return data ...
	I0626 18:46:53.445268  421526 default_sa.go:34] waiting for default service account to be created ...
	I0626 18:46:53.637627  421526 request.go:628] Waited for 192.276089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0626 18:46:53.637705  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0626 18:46:53.637716  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:53.637728  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:53.637738  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:53.640169  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:53.640190  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:53.640198  421526 round_trippers.go:580]     Content-Length: 261
	I0626 18:46:53.640203  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:53 GMT
	I0626 18:46:53.640209  421526 round_trippers.go:580]     Audit-Id: de5c45f3-ed04-4d27-ac85-ca4ade445d96
	I0626 18:46:53.640216  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:53.640225  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:53.640232  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:53.640244  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:53.640279  421526 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ca897676-a336-4642-9bc6-d9bd22b83d2a","resourceVersion":"337","creationTimestamp":"2023-06-26T18:46:39Z"}}]}
	I0626 18:46:53.640501  421526 default_sa.go:45] found service account: "default"
	I0626 18:46:53.640518  421526 default_sa.go:55] duration metric: took 195.244276ms for default service account to be created ...
	I0626 18:46:53.640527  421526 system_pods.go:116] waiting for k8s-apps to be running ...
	I0626 18:46:53.837970  421526 request.go:628] Waited for 197.355478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0626 18:46:53.838112  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0626 18:46:53.838143  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:53.838158  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:53.838170  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:53.841530  421526 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 18:46:53.841556  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:53.841565  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:53.841572  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:53.841578  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:53.841586  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:53 GMT
	I0626 18:46:53.841592  421526 round_trippers.go:580]     Audit-Id: b21c437c-16b7-4b21-8060-780e05774e84
	I0626 18:46:53.841597  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:53.842188  421526 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"442","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0626 18:46:53.844122  421526 system_pods.go:86] 8 kube-system pods found
	I0626 18:46:53.844145  421526 system_pods.go:89] "coredns-5d78c9869d-d67vq" [631de45d-1e2e-45b8-bdb0-1220d4b68aef] Running
	I0626 18:46:53.844150  421526 system_pods.go:89] "etcd-multinode-306845" [8e600dee-f767-4680-b831-a3ff0dba8338] Running
	I0626 18:46:53.844154  421526 system_pods.go:89] "kindnet-grd84" [df4468e3-83cd-4131-8236-f57f2ceab981] Running
	I0626 18:46:53.844158  421526 system_pods.go:89] "kube-apiserver-multinode-306845" [0cbdbfe4-b817-467c-ab7e-361e8faf4005] Running
	I0626 18:46:53.844163  421526 system_pods.go:89] "kube-controller-manager-multinode-306845" [39ac4739-5588-4f57-8ea6-2769d8db08a9] Running
	I0626 18:46:53.844168  421526 system_pods.go:89] "kube-proxy-sk9fw" [b17ff684-f726-4e3f-9e2e-2270a77f0712] Running
	I0626 18:46:53.844172  421526 system_pods.go:89] "kube-scheduler-multinode-306845" [6fdedf36-be16-44b5-b0bf-acb8ed9ada95] Running
	I0626 18:46:53.844176  421526 system_pods.go:89] "storage-provisioner" [61d6f823-d74e-4855-a446-94018e9ddcd8] Running
	I0626 18:46:53.844182  421526 system_pods.go:126] duration metric: took 203.650062ms to wait for k8s-apps to be running ...
	I0626 18:46:53.844199  421526 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 18:46:53.844242  421526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 18:46:53.854962  421526 system_svc.go:56] duration metric: took 10.756973ms WaitForService to wait for kubelet.
	I0626 18:46:53.854986  421526 kubeadm.go:581] duration metric: took 13.248142528s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 18:46:53.855049  421526 node_conditions.go:102] verifying NodePressure condition ...
	I0626 18:46:54.038477  421526 request.go:628] Waited for 183.338027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0626 18:46:54.038554  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0626 18:46:54.038559  421526 round_trippers.go:469] Request Headers:
	I0626 18:46:54.038567  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:46:54.038577  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:46:54.041011  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:46:54.041032  421526 round_trippers.go:577] Response Headers:
	I0626 18:46:54.041041  421526 round_trippers.go:580]     Audit-Id: f2deb66f-680b-42ec-8681-2fe00de0ef6a
	I0626 18:46:54.041046  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:46:54.041052  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:46:54.041058  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:46:54.041063  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:46:54.041070  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:46:54 GMT
	I0626 18:46:54.041179  421526 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"447"},"items":[{"metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0626 18:46:54.041564  421526 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0626 18:46:54.041581  421526 node_conditions.go:123] node cpu capacity is 8
	I0626 18:46:54.041596  421526 node_conditions.go:105] duration metric: took 186.538102ms to run NodePressure ...
	I0626 18:46:54.041609  421526 start.go:228] waiting for startup goroutines ...
	I0626 18:46:54.041618  421526 start.go:233] waiting for cluster config update ...
	I0626 18:46:54.041630  421526 start.go:242] writing updated cluster config ...
	I0626 18:46:54.044553  421526 out.go:177] 
	I0626 18:46:54.046066  421526 config.go:182] Loaded profile config "multinode-306845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:46:54.046149  421526 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/config.json ...
	I0626 18:46:54.047727  421526 out.go:177] * Starting worker node multinode-306845-m02 in cluster multinode-306845
	I0626 18:46:54.048930  421526 cache.go:122] Beginning downloading kic base image for docker with crio
	I0626 18:46:54.050220  421526 out.go:177] * Pulling base image ...
	I0626 18:46:54.051637  421526 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:46:54.051662  421526 cache.go:57] Caching tarball of preloaded images
	I0626 18:46:54.051729  421526 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon
	I0626 18:46:54.051770  421526 preload.go:174] Found /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0626 18:46:54.051785  421526 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 18:46:54.051901  421526 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/config.json ...
	I0626 18:46:54.068017  421526 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon, skipping pull
	I0626 18:46:54.068043  421526 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 exists in daemon, skipping load
	I0626 18:46:54.068062  421526 cache.go:195] Successfully downloaded all kic artifacts
	I0626 18:46:54.068107  421526 start.go:365] acquiring machines lock for multinode-306845-m02: {Name:mk7c8288fd8919abfbcaf8c4d9b25c33af2a7d20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:46:54.068243  421526 start.go:369] acquired machines lock for "multinode-306845-m02" in 109.293µs
	I0626 18:46:54.068271  421526 start.go:93] Provisioning new machine with config: &{Name:multinode-306845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-306845 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0626 18:46:54.068390  421526 start.go:125] createHost starting for "m02" (driver="docker")
	I0626 18:46:54.070430  421526 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0626 18:46:54.070558  421526 start.go:159] libmachine.API.Create for "multinode-306845" (driver="docker")
	I0626 18:46:54.070593  421526 client.go:168] LocalClient.Create starting
	I0626 18:46:54.070679  421526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem
	I0626 18:46:54.070720  421526 main.go:141] libmachine: Decoding PEM data...
	I0626 18:46:54.070744  421526 main.go:141] libmachine: Parsing certificate...
	I0626 18:46:54.070809  421526 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem
	I0626 18:46:54.070838  421526 main.go:141] libmachine: Decoding PEM data...
	I0626 18:46:54.070855  421526 main.go:141] libmachine: Parsing certificate...
	I0626 18:46:54.071100  421526 cli_runner.go:164] Run: docker network inspect multinode-306845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0626 18:46:54.086654  421526 network_create.go:76] Found existing network {name:multinode-306845 subnet:0xc00138b080 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0626 18:46:54.086697  421526 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-306845-m02" container
	I0626 18:46:54.086750  421526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0626 18:46:54.101627  421526 cli_runner.go:164] Run: docker volume create multinode-306845-m02 --label name.minikube.sigs.k8s.io=multinode-306845-m02 --label created_by.minikube.sigs.k8s.io=true
	I0626 18:46:54.117404  421526 oci.go:103] Successfully created a docker volume multinode-306845-m02
	I0626 18:46:54.117481  421526 cli_runner.go:164] Run: docker run --rm --name multinode-306845-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-306845-m02 --entrypoint /usr/bin/test -v multinode-306845-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -d /var/lib
	I0626 18:46:54.613037  421526 oci.go:107] Successfully prepared a docker volume multinode-306845-m02
	I0626 18:46:54.613100  421526 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:46:54.613127  421526 kic.go:190] Starting extracting preloaded images to volume ...
	I0626 18:46:54.613193  421526 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-306845-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -I lz4 -xf /preloaded.tar -C /extractDir
	I0626 18:46:59.474995  421526 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-306845-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 -I lz4 -xf /preloaded.tar -C /extractDir: (4.861749737s)
	I0626 18:46:59.475026  421526 kic.go:199] duration metric: took 4.861896 seconds to extract preloaded images to volume
	W0626 18:46:59.475154  421526 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0626 18:46:59.475240  421526 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0626 18:46:59.521147  421526 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-306845-m02 --name multinode-306845-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-306845-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-306845-m02 --network multinode-306845 --ip 192.168.58.3 --volume multinode-306845-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0626 18:46:59.817805  421526 cli_runner.go:164] Run: docker container inspect multinode-306845-m02 --format={{.State.Running}}
	I0626 18:46:59.834254  421526 cli_runner.go:164] Run: docker container inspect multinode-306845-m02 --format={{.State.Status}}
	I0626 18:46:59.851682  421526 cli_runner.go:164] Run: docker exec multinode-306845-m02 stat /var/lib/dpkg/alternatives/iptables
	I0626 18:46:59.896206  421526 oci.go:144] the created container "multinode-306845-m02" has a running status.
	I0626 18:46:59.896241  421526 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845-m02/id_rsa...
	I0626 18:47:00.091503  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0626 18:47:00.091572  421526 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0626 18:47:00.114056  421526 cli_runner.go:164] Run: docker container inspect multinode-306845-m02 --format={{.State.Status}}
	I0626 18:47:00.134830  421526 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0626 18:47:00.134861  421526 kic_runner.go:114] Args: [docker exec --privileged multinode-306845-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0626 18:47:00.212543  421526 cli_runner.go:164] Run: docker container inspect multinode-306845-m02 --format={{.State.Status}}
	I0626 18:47:00.233738  421526 machine.go:88] provisioning docker machine ...
	I0626 18:47:00.233776  421526 ubuntu.go:169] provisioning hostname "multinode-306845-m02"
	I0626 18:47:00.233828  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845-m02
	I0626 18:47:00.252843  421526 main.go:141] libmachine: Using SSH client type: native
	I0626 18:47:00.253304  421526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0626 18:47:00.253326  421526 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-306845-m02 && echo "multinode-306845-m02" | sudo tee /etc/hostname
	I0626 18:47:00.475588  421526 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-306845-m02
	
	I0626 18:47:00.475680  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845-m02
	I0626 18:47:00.493676  421526 main.go:141] libmachine: Using SSH client type: native
	I0626 18:47:00.494143  421526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0626 18:47:00.494165  421526 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-306845-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-306845-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-306845-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 18:47:00.624935  421526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 18:47:00.624976  421526 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16761-330054/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-330054/.minikube}
	I0626 18:47:00.625000  421526 ubuntu.go:177] setting up certificates
	I0626 18:47:00.625013  421526 provision.go:83] configureAuth start
	I0626 18:47:00.625084  421526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-306845-m02
	I0626 18:47:00.640030  421526 provision.go:138] copyHostCerts
	I0626 18:47:00.640076  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem
	I0626 18:47:00.640124  421526 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem, removing ...
	I0626 18:47:00.640133  421526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem
	I0626 18:47:00.640194  421526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem (1082 bytes)
	I0626 18:47:00.640265  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem
	I0626 18:47:00.640282  421526 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem, removing ...
	I0626 18:47:00.640291  421526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem
	I0626 18:47:00.640313  421526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem (1123 bytes)
	I0626 18:47:00.640361  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem
	I0626 18:47:00.640377  421526 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem, removing ...
	I0626 18:47:00.640383  421526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem
	I0626 18:47:00.640403  421526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem (1679 bytes)
	I0626 18:47:00.640449  421526 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem org=jenkins.multinode-306845-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-306845-m02]
	I0626 18:47:00.725931  421526 provision.go:172] copyRemoteCerts
	I0626 18:47:00.725995  421526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 18:47:00.726033  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845-m02
	I0626 18:47:00.741807  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845-m02/id_rsa Username:docker}
	I0626 18:47:00.833375  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0626 18:47:00.833435  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 18:47:00.854595  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0626 18:47:00.854650  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0626 18:47:00.875622  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0626 18:47:00.875684  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0626 18:47:00.896649  421526 provision.go:86] duration metric: configureAuth took 271.618431ms
	I0626 18:47:00.896672  421526 ubuntu.go:193] setting minikube options for container-runtime
	I0626 18:47:00.896856  421526 config.go:182] Loaded profile config "multinode-306845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:47:00.896987  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845-m02
	I0626 18:47:00.914871  421526 main.go:141] libmachine: Using SSH client type: native
	I0626 18:47:00.915301  421526 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33164 <nil> <nil>}
	I0626 18:47:00.915320  421526 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 18:47:01.125247  421526 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 18:47:01.125273  421526 machine.go:91] provisioned docker machine in 891.513703ms
	I0626 18:47:01.125281  421526 client.go:171] LocalClient.Create took 7.05467992s
	I0626 18:47:01.125298  421526 start.go:167] duration metric: libmachine.API.Create for "multinode-306845" took 7.054739399s
	I0626 18:47:01.125307  421526 start.go:300] post-start starting for "multinode-306845-m02" (driver="docker")
	I0626 18:47:01.125324  421526 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 18:47:01.125384  421526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 18:47:01.125440  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845-m02
	I0626 18:47:01.142249  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845-m02/id_rsa Username:docker}
	I0626 18:47:01.234154  421526 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 18:47:01.237123  421526 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0626 18:47:01.237148  421526 command_runner.go:130] > NAME="Ubuntu"
	I0626 18:47:01.237156  421526 command_runner.go:130] > VERSION_ID="22.04"
	I0626 18:47:01.237164  421526 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0626 18:47:01.237170  421526 command_runner.go:130] > VERSION_CODENAME=jammy
	I0626 18:47:01.237173  421526 command_runner.go:130] > ID=ubuntu
	I0626 18:47:01.237177  421526 command_runner.go:130] > ID_LIKE=debian
	I0626 18:47:01.237182  421526 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0626 18:47:01.237186  421526 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0626 18:47:01.237192  421526 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0626 18:47:01.237199  421526 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0626 18:47:01.237203  421526 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0626 18:47:01.237257  421526 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0626 18:47:01.237280  421526 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0626 18:47:01.237291  421526 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0626 18:47:01.237299  421526 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0626 18:47:01.237308  421526 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/addons for local assets ...
	I0626 18:47:01.237357  421526 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/files for local assets ...
	I0626 18:47:01.237433  421526 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> 3369352.pem in /etc/ssl/certs
	I0626 18:47:01.237450  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> /etc/ssl/certs/3369352.pem
	I0626 18:47:01.237580  421526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 18:47:01.245257  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem --> /etc/ssl/certs/3369352.pem (1708 bytes)
	I0626 18:47:01.266647  421526 start.go:303] post-start completed in 141.320904ms
	I0626 18:47:01.266996  421526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-306845-m02
	I0626 18:47:01.282649  421526 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/config.json ...
	I0626 18:47:01.282908  421526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 18:47:01.282995  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845-m02
	I0626 18:47:01.298975  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845-m02/id_rsa Username:docker}
	I0626 18:47:01.389697  421526 command_runner.go:130] > 18%!
	(MISSING)I0626 18:47:01.389790  421526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0626 18:47:01.393910  421526 command_runner.go:130] > 241G
	I0626 18:47:01.393941  421526 start.go:128] duration metric: createHost completed in 7.32553729s
	I0626 18:47:01.393949  421526 start.go:83] releasing machines lock for "multinode-306845-m02", held for 7.325695179s
	I0626 18:47:01.394005  421526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-306845-m02
	I0626 18:47:01.412414  421526 out.go:177] * Found network options:
	I0626 18:47:01.414174  421526 out.go:177]   - NO_PROXY=192.168.58.2
	W0626 18:47:01.415549  421526 proxy.go:119] fail to check proxy env: Error ip not in block
	W0626 18:47:01.415617  421526 proxy.go:119] fail to check proxy env: Error ip not in block
	I0626 18:47:01.415690  421526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 18:47:01.415732  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845-m02
	I0626 18:47:01.415786  421526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 18:47:01.415858  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845-m02
	I0626 18:47:01.433516  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845-m02/id_rsa Username:docker}
	I0626 18:47:01.433767  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845-m02/id_rsa Username:docker}
	I0626 18:47:01.628504  421526 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0626 18:47:01.656387  421526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 18:47:01.660393  421526 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0626 18:47:01.660420  421526 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0626 18:47:01.660432  421526 command_runner.go:130] > Device: b0h/176d	Inode: 2344971     Links: 1
	I0626 18:47:01.660443  421526 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 18:47:01.660454  421526 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0626 18:47:01.660464  421526 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0626 18:47:01.660475  421526 command_runner.go:130] > Change: 2023-06-26 18:26:22.195668408 +0000
	I0626 18:47:01.660482  421526 command_runner.go:130] >  Birth: 2023-06-26 18:26:22.195668408 +0000
	I0626 18:47:01.660574  421526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:47:01.677745  421526 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0626 18:47:01.677826  421526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:47:01.704225  421526 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0626 18:47:01.704297  421526 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0626 18:47:01.704309  421526 start.go:466] detecting cgroup driver to use...
	I0626 18:47:01.704342  421526 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0626 18:47:01.704401  421526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 18:47:01.718409  421526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 18:47:01.728826  421526 docker.go:196] disabling cri-docker service (if available) ...
	I0626 18:47:01.728912  421526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 18:47:01.740821  421526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 18:47:01.753125  421526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0626 18:47:01.828900  421526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 18:47:01.904091  421526 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0626 18:47:01.904123  421526 docker.go:212] disabling docker service ...
	I0626 18:47:01.904165  421526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 18:47:01.921369  421526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 18:47:01.931738  421526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 18:47:02.008109  421526 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0626 18:47:02.008181  421526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 18:47:02.084323  421526 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0626 18:47:02.084408  421526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 18:47:02.094902  421526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 18:47:02.108685  421526 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0626 18:47:02.110128  421526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0626 18:47:02.110189  421526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:47:02.118796  421526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0626 18:47:02.118855  421526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:47:02.127481  421526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:47:02.136222  421526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:47:02.145213  421526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0626 18:47:02.153107  421526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0626 18:47:02.159741  421526 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0626 18:47:02.160339  421526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0626 18:47:02.167870  421526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0626 18:47:02.239301  421526 ssh_runner.go:195] Run: sudo systemctl restart crio
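	(Note on the step above, not part of the captured log: the sed edits rewrite /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A minimal sketch of checking the result, assuming the drop-in path and keys shown in the log; the expected values come from the sed expressions and the `crio config` dump further down, not from a captured file:)
	  sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	  # expected, per the sed edits above:
	  # pause_image = "registry.k8s.io/pause:3.9"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"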
	I0626 18:47:02.343548  421526 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0626 18:47:02.343622  421526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0626 18:47:02.346982  421526 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0626 18:47:02.347005  421526 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0626 18:47:02.347015  421526 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0626 18:47:02.347025  421526 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 18:47:02.347034  421526 command_runner.go:130] > Access: 2023-06-26 18:47:02.332585605 +0000
	I0626 18:47:02.347044  421526 command_runner.go:130] > Modify: 2023-06-26 18:47:02.332585605 +0000
	I0626 18:47:02.347058  421526 command_runner.go:130] > Change: 2023-06-26 18:47:02.332585605 +0000
	I0626 18:47:02.347064  421526 command_runner.go:130] >  Birth: -
	I0626 18:47:02.347085  421526 start.go:534] Will wait 60s for crictl version
	I0626 18:47:02.347128  421526 ssh_runner.go:195] Run: which crictl
	I0626 18:47:02.350049  421526 command_runner.go:130] > /usr/bin/crictl
	I0626 18:47:02.350201  421526 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0626 18:47:02.380723  421526 command_runner.go:130] > Version:  0.1.0
	I0626 18:47:02.380742  421526 command_runner.go:130] > RuntimeName:  cri-o
	I0626 18:47:02.380746  421526 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0626 18:47:02.380751  421526 command_runner.go:130] > RuntimeApiVersion:  v1
	I0626 18:47:02.382820  421526 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0626 18:47:02.382897  421526 ssh_runner.go:195] Run: crio --version
	I0626 18:47:02.413704  421526 command_runner.go:130] > crio version 1.24.6
	I0626 18:47:02.413724  421526 command_runner.go:130] > Version:          1.24.6
	I0626 18:47:02.413733  421526 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0626 18:47:02.413755  421526 command_runner.go:130] > GitTreeState:     clean
	I0626 18:47:02.413767  421526 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0626 18:47:02.413773  421526 command_runner.go:130] > GoVersion:        go1.18.2
	I0626 18:47:02.413778  421526 command_runner.go:130] > Compiler:         gc
	I0626 18:47:02.413788  421526 command_runner.go:130] > Platform:         linux/amd64
	I0626 18:47:02.413798  421526 command_runner.go:130] > Linkmode:         dynamic
	I0626 18:47:02.413808  421526 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 18:47:02.413815  421526 command_runner.go:130] > SeccompEnabled:   true
	I0626 18:47:02.413819  421526 command_runner.go:130] > AppArmorEnabled:  false
	I0626 18:47:02.415319  421526 ssh_runner.go:195] Run: crio --version
	I0626 18:47:02.447341  421526 command_runner.go:130] > crio version 1.24.6
	I0626 18:47:02.447368  421526 command_runner.go:130] > Version:          1.24.6
	I0626 18:47:02.447377  421526 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0626 18:47:02.447383  421526 command_runner.go:130] > GitTreeState:     clean
	I0626 18:47:02.447391  421526 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0626 18:47:02.447398  421526 command_runner.go:130] > GoVersion:        go1.18.2
	I0626 18:47:02.447404  421526 command_runner.go:130] > Compiler:         gc
	I0626 18:47:02.447410  421526 command_runner.go:130] > Platform:         linux/amd64
	I0626 18:47:02.447418  421526 command_runner.go:130] > Linkmode:         dynamic
	I0626 18:47:02.447431  421526 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0626 18:47:02.447443  421526 command_runner.go:130] > SeccompEnabled:   true
	I0626 18:47:02.447453  421526 command_runner.go:130] > AppArmorEnabled:  false
	I0626 18:47:02.450880  421526 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0626 18:47:02.452272  421526 out.go:177]   - env NO_PROXY=192.168.58.2
	I0626 18:47:02.453520  421526 cli_runner.go:164] Run: docker network inspect multinode-306845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0626 18:47:02.468779  421526 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0626 18:47:02.472310  421526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 18:47:02.482061  421526 certs.go:56] Setting up /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845 for IP: 192.168.58.3
	I0626 18:47:02.482089  421526 certs.go:190] acquiring lock for shared ca certs: {Name:mk5dcd9e05f1fa507f67df494d102e50ef2554ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:47:02.482233  421526 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key
	I0626 18:47:02.482290  421526 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key
	I0626 18:47:02.482306  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0626 18:47:02.482322  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0626 18:47:02.482342  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0626 18:47:02.482376  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0626 18:47:02.482439  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935.pem (1338 bytes)
	W0626 18:47:02.482472  421526 certs.go:433] ignoring /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935_empty.pem, impossibly tiny 0 bytes
	I0626 18:47:02.482488  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem (1675 bytes)
	I0626 18:47:02.482521  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem (1082 bytes)
	I0626 18:47:02.482559  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem (1123 bytes)
	I0626 18:47:02.482593  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem (1679 bytes)
	I0626 18:47:02.482658  421526 certs.go:437] found cert: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem (1708 bytes)
	I0626 18:47:02.482693  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> /usr/share/ca-certificates/3369352.pem
	I0626 18:47:02.482713  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:47:02.482730  421526 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935.pem -> /usr/share/ca-certificates/336935.pem
	I0626 18:47:02.483135  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0626 18:47:02.504099  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0626 18:47:02.526533  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0626 18:47:02.548759  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0626 18:47:02.571273  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem --> /usr/share/ca-certificates/3369352.pem (1708 bytes)
	I0626 18:47:02.593610  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0626 18:47:02.614588  421526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/certs/336935.pem --> /usr/share/ca-certificates/336935.pem (1338 bytes)
	I0626 18:47:02.635750  421526 ssh_runner.go:195] Run: openssl version
	I0626 18:47:02.640815  421526 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0626 18:47:02.640907  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3369352.pem && ln -fs /usr/share/ca-certificates/3369352.pem /etc/ssl/certs/3369352.pem"
	I0626 18:47:02.649207  421526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3369352.pem
	I0626 18:47:02.652300  421526 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 26 18:32 /usr/share/ca-certificates/3369352.pem
	I0626 18:47:02.652331  421526 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 26 18:32 /usr/share/ca-certificates/3369352.pem
	I0626 18:47:02.652367  421526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3369352.pem
	I0626 18:47:02.658324  421526 command_runner.go:130] > 3ec20f2e
	I0626 18:47:02.658386  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3369352.pem /etc/ssl/certs/3ec20f2e.0"
	I0626 18:47:02.666957  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0626 18:47:02.675323  421526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:47:02.678404  421526 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 26 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:47:02.678438  421526 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 26 18:26 /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:47:02.678477  421526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0626 18:47:02.684513  421526 command_runner.go:130] > b5213941
	I0626 18:47:02.684723  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0626 18:47:02.692735  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/336935.pem && ln -fs /usr/share/ca-certificates/336935.pem /etc/ssl/certs/336935.pem"
	I0626 18:47:02.700797  421526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/336935.pem
	I0626 18:47:02.703810  421526 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 26 18:32 /usr/share/ca-certificates/336935.pem
	I0626 18:47:02.703856  421526 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 26 18:32 /usr/share/ca-certificates/336935.pem
	I0626 18:47:02.703897  421526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/336935.pem
	I0626 18:47:02.709826  421526 command_runner.go:130] > 51391683
	I0626 18:47:02.710070  421526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/336935.pem /etc/ssl/certs/51391683.0"
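	(Note on the three hash/symlink passes above, not part of the captured log: each one follows the same pattern, where OpenSSL's subject hash of the certificate becomes the `<hash>.0` name the system trust store resolves. A one-line sketch of that pattern with a placeholder certificate path, not a path from this run:)
	  CERT=/usr/share/ca-certificates/example.pem   # placeholder, for illustration only
	  sudo ln -fs "$CERT" /etc/ssl/certs/"$(openssl x509 -hash -noout -in "$CERT")".0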
	I0626 18:47:02.718450  421526 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0626 18:47:02.721372  421526 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 18:47:02.721453  421526 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0626 18:47:02.721548  421526 ssh_runner.go:195] Run: crio config
	I0626 18:47:02.757095  421526 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0626 18:47:02.757123  421526 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0626 18:47:02.757130  421526 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0626 18:47:02.757134  421526 command_runner.go:130] > #
	I0626 18:47:02.757145  421526 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0626 18:47:02.757154  421526 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0626 18:47:02.757162  421526 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0626 18:47:02.757174  421526 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0626 18:47:02.757180  421526 command_runner.go:130] > # reload'.
	I0626 18:47:02.757190  421526 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0626 18:47:02.757205  421526 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0626 18:47:02.757216  421526 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0626 18:47:02.757225  421526 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0626 18:47:02.757234  421526 command_runner.go:130] > [crio]
	I0626 18:47:02.757244  421526 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0626 18:47:02.757256  421526 command_runner.go:130] > # containers images, in this directory.
	I0626 18:47:02.757267  421526 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0626 18:47:02.757276  421526 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0626 18:47:02.757288  421526 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0626 18:47:02.757301  421526 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0626 18:47:02.757314  421526 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0626 18:47:02.757325  421526 command_runner.go:130] > # storage_driver = "vfs"
	I0626 18:47:02.757337  421526 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0626 18:47:02.757348  421526 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0626 18:47:02.757357  421526 command_runner.go:130] > # storage_option = [
	I0626 18:47:02.757394  421526 command_runner.go:130] > # ]
	I0626 18:47:02.757410  421526 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0626 18:47:02.757420  421526 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0626 18:47:02.757432  421526 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0626 18:47:02.757443  421526 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0626 18:47:02.757456  421526 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0626 18:47:02.757463  421526 command_runner.go:130] > # always happen on a node reboot
	I0626 18:47:02.757471  421526 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0626 18:47:02.757481  421526 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0626 18:47:02.757493  421526 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0626 18:47:02.757511  421526 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0626 18:47:02.757522  421526 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0626 18:47:02.757535  421526 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0626 18:47:02.757556  421526 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0626 18:47:02.757570  421526 command_runner.go:130] > # internal_wipe = true
	I0626 18:47:02.757579  421526 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0626 18:47:02.757588  421526 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0626 18:47:02.757598  421526 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0626 18:47:02.757606  421526 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0626 18:47:02.757615  421526 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0626 18:47:02.757625  421526 command_runner.go:130] > [crio.api]
	I0626 18:47:02.757632  421526 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0626 18:47:02.757640  421526 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0626 18:47:02.757648  421526 command_runner.go:130] > # IP address on which the stream server will listen.
	I0626 18:47:02.757655  421526 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0626 18:47:02.757666  421526 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0626 18:47:02.757677  421526 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0626 18:47:02.757686  421526 command_runner.go:130] > # stream_port = "0"
	I0626 18:47:02.757694  421526 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0626 18:47:02.757704  421526 command_runner.go:130] > # stream_enable_tls = false
	I0626 18:47:02.757714  421526 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0626 18:47:02.757727  421526 command_runner.go:130] > # stream_idle_timeout = ""
	I0626 18:47:02.757737  421526 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0626 18:47:02.757750  421526 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0626 18:47:02.757757  421526 command_runner.go:130] > # minutes.
	I0626 18:47:02.757767  421526 command_runner.go:130] > # stream_tls_cert = ""
	I0626 18:47:02.757777  421526 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0626 18:47:02.757791  421526 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0626 18:47:02.757802  421526 command_runner.go:130] > # stream_tls_key = ""
	I0626 18:47:02.757813  421526 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0626 18:47:02.757829  421526 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0626 18:47:02.757843  421526 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0626 18:47:02.757854  421526 command_runner.go:130] > # stream_tls_ca = ""
	I0626 18:47:02.757866  421526 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 18:47:02.757876  421526 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0626 18:47:02.757888  421526 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0626 18:47:02.757899  421526 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0626 18:47:02.757958  421526 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0626 18:47:02.757973  421526 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0626 18:47:02.757979  421526 command_runner.go:130] > [crio.runtime]
	I0626 18:47:02.757989  421526 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0626 18:47:02.757998  421526 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0626 18:47:02.758008  421526 command_runner.go:130] > # "nofile=1024:2048"
	I0626 18:47:02.758018  421526 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0626 18:47:02.758029  421526 command_runner.go:130] > # default_ulimits = [
	I0626 18:47:02.758034  421526 command_runner.go:130] > # ]
	I0626 18:47:02.758048  421526 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0626 18:47:02.758055  421526 command_runner.go:130] > # no_pivot = false
	I0626 18:47:02.758068  421526 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0626 18:47:02.758080  421526 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0626 18:47:02.758091  421526 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0626 18:47:02.758100  421526 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0626 18:47:02.758112  421526 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0626 18:47:02.758123  421526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 18:47:02.758133  421526 command_runner.go:130] > # conmon = ""
	I0626 18:47:02.758140  421526 command_runner.go:130] > # Cgroup setting for conmon
	I0626 18:47:02.758155  421526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0626 18:47:02.758164  421526 command_runner.go:130] > conmon_cgroup = "pod"
	I0626 18:47:02.758175  421526 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0626 18:47:02.758187  421526 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0626 18:47:02.758198  421526 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0626 18:47:02.758208  421526 command_runner.go:130] > # conmon_env = [
	I0626 18:47:02.758218  421526 command_runner.go:130] > # ]
	I0626 18:47:02.758230  421526 command_runner.go:130] > # Additional environment variables to set for all the
	I0626 18:47:02.758241  421526 command_runner.go:130] > # containers. These are overridden if set in the
	I0626 18:47:02.758251  421526 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0626 18:47:02.758258  421526 command_runner.go:130] > # default_env = [
	I0626 18:47:02.758267  421526 command_runner.go:130] > # ]
	I0626 18:47:02.758276  421526 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0626 18:47:02.758285  421526 command_runner.go:130] > # selinux = false
	I0626 18:47:02.758295  421526 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0626 18:47:02.758309  421526 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0626 18:47:02.758321  421526 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0626 18:47:02.758332  421526 command_runner.go:130] > # seccomp_profile = ""
	I0626 18:47:02.758342  421526 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0626 18:47:02.758354  421526 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0626 18:47:02.758367  421526 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0626 18:47:02.758377  421526 command_runner.go:130] > # which might increase security.
	I0626 18:47:02.758388  421526 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0626 18:47:02.758401  421526 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0626 18:47:02.758414  421526 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0626 18:47:02.758427  421526 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0626 18:47:02.758441  421526 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0626 18:47:02.758453  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:47:02.758463  421526 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0626 18:47:02.758471  421526 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0626 18:47:02.758480  421526 command_runner.go:130] > # the cgroup blockio controller.
	I0626 18:47:02.758486  421526 command_runner.go:130] > # blockio_config_file = ""
	I0626 18:47:02.758494  421526 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0626 18:47:02.758500  421526 command_runner.go:130] > # irqbalance daemon.
	I0626 18:47:02.758519  421526 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0626 18:47:02.758527  421526 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0626 18:47:02.758534  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:47:02.758539  421526 command_runner.go:130] > # rdt_config_file = ""
	I0626 18:47:02.758549  421526 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0626 18:47:02.758555  421526 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0626 18:47:02.758563  421526 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0626 18:47:02.758568  421526 command_runner.go:130] > # separate_pull_cgroup = ""
	I0626 18:47:02.758577  421526 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0626 18:47:02.758585  421526 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0626 18:47:02.758590  421526 command_runner.go:130] > # will be added.
	I0626 18:47:02.758595  421526 command_runner.go:130] > # default_capabilities = [
	I0626 18:47:02.758599  421526 command_runner.go:130] > # 	"CHOWN",
	I0626 18:47:02.758605  421526 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0626 18:47:02.758611  421526 command_runner.go:130] > # 	"FSETID",
	I0626 18:47:02.758617  421526 command_runner.go:130] > # 	"FOWNER",
	I0626 18:47:02.758623  421526 command_runner.go:130] > # 	"SETGID",
	I0626 18:47:02.758633  421526 command_runner.go:130] > # 	"SETUID",
	I0626 18:47:02.758639  421526 command_runner.go:130] > # 	"SETPCAP",
	I0626 18:47:02.758648  421526 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0626 18:47:02.758654  421526 command_runner.go:130] > # 	"KILL",
	I0626 18:47:02.758663  421526 command_runner.go:130] > # ]
	I0626 18:47:02.758676  421526 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0626 18:47:02.758690  421526 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0626 18:47:02.758698  421526 command_runner.go:130] > # add_inheritable_capabilities = true
	I0626 18:47:02.758711  421526 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0626 18:47:02.758723  421526 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 18:47:02.758733  421526 command_runner.go:130] > # default_sysctls = [
	I0626 18:47:02.758740  421526 command_runner.go:130] > # ]
	I0626 18:47:02.758748  421526 command_runner.go:130] > # List of devices on the host that a
	I0626 18:47:02.758759  421526 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0626 18:47:02.758767  421526 command_runner.go:130] > # allowed_devices = [
	I0626 18:47:02.758773  421526 command_runner.go:130] > # 	"/dev/fuse",
	I0626 18:47:02.758782  421526 command_runner.go:130] > # ]
	I0626 18:47:02.758791  421526 command_runner.go:130] > # List of additional devices. specified as
	I0626 18:47:02.758846  421526 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0626 18:47:02.758854  421526 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0626 18:47:02.758864  421526 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0626 18:47:02.758876  421526 command_runner.go:130] > # additional_devices = [
	I0626 18:47:02.758882  421526 command_runner.go:130] > # ]
	I0626 18:47:02.758894  421526 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0626 18:47:02.758903  421526 command_runner.go:130] > # cdi_spec_dirs = [
	I0626 18:47:02.758912  421526 command_runner.go:130] > # 	"/etc/cdi",
	I0626 18:47:02.758921  421526 command_runner.go:130] > # 	"/var/run/cdi",
	I0626 18:47:02.758931  421526 command_runner.go:130] > # ]
	I0626 18:47:02.758941  421526 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0626 18:47:02.758950  421526 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0626 18:47:02.758956  421526 command_runner.go:130] > # Defaults to false.
	I0626 18:47:02.758967  421526 command_runner.go:130] > # device_ownership_from_security_context = false
	I0626 18:47:02.758981  421526 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0626 18:47:02.758991  421526 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0626 18:47:02.759001  421526 command_runner.go:130] > # hooks_dir = [
	I0626 18:47:02.759009  421526 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0626 18:47:02.759017  421526 command_runner.go:130] > # ]
	I0626 18:47:02.759028  421526 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0626 18:47:02.759041  421526 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0626 18:47:02.759050  421526 command_runner.go:130] > # its default mounts from the following two files:
	I0626 18:47:02.759053  421526 command_runner.go:130] > #
	I0626 18:47:02.759064  421526 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0626 18:47:02.759078  421526 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0626 18:47:02.759091  421526 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0626 18:47:02.759099  421526 command_runner.go:130] > #
	I0626 18:47:02.759114  421526 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0626 18:47:02.759127  421526 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0626 18:47:02.759140  421526 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0626 18:47:02.759149  421526 command_runner.go:130] > #      only add mounts it finds in this file.
	I0626 18:47:02.759153  421526 command_runner.go:130] > #
	I0626 18:47:02.759163  421526 command_runner.go:130] > # default_mounts_file = ""
	I0626 18:47:02.759175  421526 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0626 18:47:02.759187  421526 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0626 18:47:02.759197  421526 command_runner.go:130] > # pids_limit = 0
	I0626 18:47:02.759207  421526 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0626 18:47:02.759219  421526 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0626 18:47:02.759232  421526 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0626 18:47:02.759245  421526 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0626 18:47:02.759252  421526 command_runner.go:130] > # log_size_max = -1
	I0626 18:47:02.759264  421526 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0626 18:47:02.759275  421526 command_runner.go:130] > # log_to_journald = false
	I0626 18:47:02.759285  421526 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0626 18:47:02.759297  421526 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0626 18:47:02.759309  421526 command_runner.go:130] > # Path to directory for container attach sockets.
	I0626 18:47:02.759319  421526 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0626 18:47:02.759331  421526 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0626 18:47:02.759340  421526 command_runner.go:130] > # bind_mount_prefix = ""
	I0626 18:47:02.759346  421526 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0626 18:47:02.759354  421526 command_runner.go:130] > # read_only = false
	I0626 18:47:02.759365  421526 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0626 18:47:02.759378  421526 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0626 18:47:02.759386  421526 command_runner.go:130] > # live configuration reload.
	I0626 18:47:02.759396  421526 command_runner.go:130] > # log_level = "info"
	I0626 18:47:02.759405  421526 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0626 18:47:02.759417  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:47:02.759426  421526 command_runner.go:130] > # log_filter = ""
	I0626 18:47:02.759436  421526 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0626 18:47:02.759446  421526 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0626 18:47:02.759454  421526 command_runner.go:130] > # separated by comma.
	I0626 18:47:02.759461  421526 command_runner.go:130] > # uid_mappings = ""
	I0626 18:47:02.759474  421526 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0626 18:47:02.759485  421526 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0626 18:47:02.759495  421526 command_runner.go:130] > # separated by comma.
	I0626 18:47:02.759502  421526 command_runner.go:130] > # gid_mappings = ""
	I0626 18:47:02.759514  421526 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0626 18:47:02.759527  421526 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 18:47:02.759541  421526 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 18:47:02.759554  421526 command_runner.go:130] > # minimum_mappable_uid = -1
	I0626 18:47:02.759564  421526 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0626 18:47:02.759578  421526 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0626 18:47:02.759588  421526 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0626 18:47:02.759599  421526 command_runner.go:130] > # minimum_mappable_gid = -1
	I0626 18:47:02.759612  421526 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0626 18:47:02.759625  421526 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0626 18:47:02.759637  421526 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0626 18:47:02.759645  421526 command_runner.go:130] > # ctr_stop_timeout = 30
	I0626 18:47:02.759651  421526 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0626 18:47:02.759688  421526 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0626 18:47:02.759700  421526 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0626 18:47:02.759710  421526 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0626 18:47:02.759720  421526 command_runner.go:130] > # drop_infra_ctr = true
	I0626 18:47:02.759734  421526 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0626 18:47:02.759748  421526 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0626 18:47:02.759762  421526 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0626 18:47:02.759770  421526 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0626 18:47:02.759778  421526 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0626 18:47:02.759789  421526 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0626 18:47:02.759800  421526 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0626 18:47:02.759812  421526 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0626 18:47:02.759822  421526 command_runner.go:130] > # pinns_path = ""
	I0626 18:47:02.759832  421526 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0626 18:47:02.759844  421526 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0626 18:47:02.759858  421526 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0626 18:47:02.759867  421526 command_runner.go:130] > # default_runtime = "runc"
	I0626 18:47:02.759872  421526 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0626 18:47:02.759884  421526 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0626 18:47:02.759901  421526 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0626 18:47:02.759913  421526 command_runner.go:130] > # creation as a file is not desired either.
	I0626 18:47:02.759928  421526 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0626 18:47:02.759939  421526 command_runner.go:130] > # the hostname is being managed dynamically.
	I0626 18:47:02.759950  421526 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0626 18:47:02.759957  421526 command_runner.go:130] > # ]
	I0626 18:47:02.759964  421526 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0626 18:47:02.759977  421526 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0626 18:47:02.759991  421526 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0626 18:47:02.760005  421526 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0626 18:47:02.760013  421526 command_runner.go:130] > #
	I0626 18:47:02.760022  421526 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0626 18:47:02.760033  421526 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0626 18:47:02.760042  421526 command_runner.go:130] > #  runtime_type = "oci"
	I0626 18:47:02.760050  421526 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0626 18:47:02.760058  421526 command_runner.go:130] > #  privileged_without_host_devices = false
	I0626 18:47:02.760063  421526 command_runner.go:130] > #  allowed_annotations = []
	I0626 18:47:02.760071  421526 command_runner.go:130] > # Where:
	I0626 18:47:02.760083  421526 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0626 18:47:02.760098  421526 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0626 18:47:02.760113  421526 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0626 18:47:02.760126  421526 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0626 18:47:02.760135  421526 command_runner.go:130] > #   in $PATH.
	I0626 18:47:02.760144  421526 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0626 18:47:02.760151  421526 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0626 18:47:02.760161  421526 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0626 18:47:02.760171  421526 command_runner.go:130] > #   state.
	I0626 18:47:02.760182  421526 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0626 18:47:02.760195  421526 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0626 18:47:02.760208  421526 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0626 18:47:02.760220  421526 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0626 18:47:02.760233  421526 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0626 18:47:02.760243  421526 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0626 18:47:02.760251  421526 command_runner.go:130] > #   The currently recognized values are:
	I0626 18:47:02.760261  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0626 18:47:02.760276  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0626 18:47:02.760290  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0626 18:47:02.760303  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0626 18:47:02.760318  421526 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0626 18:47:02.760331  421526 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0626 18:47:02.760343  421526 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0626 18:47:02.760354  421526 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0626 18:47:02.760363  421526 command_runner.go:130] > #   should be moved to the container's cgroup
	I0626 18:47:02.760371  421526 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0626 18:47:02.760380  421526 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0626 18:47:02.760387  421526 command_runner.go:130] > runtime_type = "oci"
	I0626 18:47:02.760397  421526 command_runner.go:130] > runtime_root = "/run/runc"
	I0626 18:47:02.760404  421526 command_runner.go:130] > runtime_config_path = ""
	I0626 18:47:02.760413  421526 command_runner.go:130] > monitor_path = ""
	I0626 18:47:02.760420  421526 command_runner.go:130] > monitor_cgroup = ""
	I0626 18:47:02.760429  421526 command_runner.go:130] > monitor_exec_cgroup = ""
	I0626 18:47:02.760491  421526 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0626 18:47:02.760503  421526 command_runner.go:130] > # running containers
	I0626 18:47:02.760508  421526 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0626 18:47:02.760514  421526 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0626 18:47:02.760524  421526 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0626 18:47:02.760532  421526 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0626 18:47:02.760540  421526 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0626 18:47:02.760551  421526 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0626 18:47:02.760556  421526 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0626 18:47:02.760561  421526 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0626 18:47:02.760568  421526 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0626 18:47:02.760572  421526 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0626 18:47:02.760581  421526 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0626 18:47:02.760588  421526 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0626 18:47:02.760597  421526 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0626 18:47:02.760605  421526 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0626 18:47:02.760615  421526 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0626 18:47:02.760623  421526 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0626 18:47:02.760635  421526 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0626 18:47:02.760645  421526 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0626 18:47:02.760653  421526 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0626 18:47:02.760663  421526 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0626 18:47:02.760669  421526 command_runner.go:130] > # Example:
	I0626 18:47:02.760674  421526 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0626 18:47:02.760681  421526 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0626 18:47:02.760686  421526 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0626 18:47:02.760693  421526 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0626 18:47:02.760697  421526 command_runner.go:130] > # cpuset = 0
	I0626 18:47:02.760703  421526 command_runner.go:130] > # cpushares = "0-1"
	I0626 18:47:02.760706  421526 command_runner.go:130] > # Where:
	I0626 18:47:02.760713  421526 command_runner.go:130] > # The workload name is workload-type.
	I0626 18:47:02.760720  421526 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0626 18:47:02.760728  421526 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0626 18:47:02.760736  421526 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0626 18:47:02.760746  421526 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0626 18:47:02.760752  421526 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0626 18:47:02.760757  421526 command_runner.go:130] > # 
	I0626 18:47:02.760763  421526 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0626 18:47:02.760769  421526 command_runner.go:130] > #
	I0626 18:47:02.760774  421526 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0626 18:47:02.760784  421526 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0626 18:47:02.760793  421526 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0626 18:47:02.760802  421526 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0626 18:47:02.760810  421526 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0626 18:47:02.760816  421526 command_runner.go:130] > [crio.image]
	I0626 18:47:02.760822  421526 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0626 18:47:02.760829  421526 command_runner.go:130] > # default_transport = "docker://"
	I0626 18:47:02.760835  421526 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0626 18:47:02.760843  421526 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0626 18:47:02.760849  421526 command_runner.go:130] > # global_auth_file = ""
	I0626 18:47:02.760855  421526 command_runner.go:130] > # The image used to instantiate infra containers.
	I0626 18:47:02.760876  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:47:02.760885  421526 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0626 18:47:02.760892  421526 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0626 18:47:02.760900  421526 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0626 18:47:02.760907  421526 command_runner.go:130] > # This option supports live configuration reload.
	I0626 18:47:02.760912  421526 command_runner.go:130] > # pause_image_auth_file = ""
	I0626 18:47:02.760919  421526 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0626 18:47:02.760926  421526 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0626 18:47:02.760934  421526 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0626 18:47:02.760942  421526 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0626 18:47:02.760949  421526 command_runner.go:130] > # pause_command = "/pause"
	I0626 18:47:02.760955  421526 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0626 18:47:02.760964  421526 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0626 18:47:02.760971  421526 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0626 18:47:02.760979  421526 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0626 18:47:02.760987  421526 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0626 18:47:02.760992  421526 command_runner.go:130] > # signature_policy = ""
	I0626 18:47:02.761004  421526 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0626 18:47:02.761012  421526 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0626 18:47:02.761016  421526 command_runner.go:130] > # changing them here.
	I0626 18:47:02.761020  421526 command_runner.go:130] > # insecure_registries = [
	I0626 18:47:02.761024  421526 command_runner.go:130] > # ]
	I0626 18:47:02.761032  421526 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0626 18:47:02.761040  421526 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0626 18:47:02.761046  421526 command_runner.go:130] > # image_volumes = "mkdir"
	I0626 18:47:02.761055  421526 command_runner.go:130] > # Temporary directory to use for storing big files
	I0626 18:47:02.761061  421526 command_runner.go:130] > # big_files_temporary_dir = ""
	I0626 18:47:02.761069  421526 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0626 18:47:02.761074  421526 command_runner.go:130] > # CNI plugins.
	I0626 18:47:02.761079  421526 command_runner.go:130] > [crio.network]
	I0626 18:47:02.761086  421526 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0626 18:47:02.761096  421526 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0626 18:47:02.761102  421526 command_runner.go:130] > # cni_default_network = ""
	I0626 18:47:02.761108  421526 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0626 18:47:02.761115  421526 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0626 18:47:02.761121  421526 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0626 18:47:02.761127  421526 command_runner.go:130] > # plugin_dirs = [
	I0626 18:47:02.761132  421526 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0626 18:47:02.761137  421526 command_runner.go:130] > # ]
	I0626 18:47:02.761143  421526 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0626 18:47:02.761149  421526 command_runner.go:130] > [crio.metrics]
	I0626 18:47:02.761154  421526 command_runner.go:130] > # Globally enable or disable metrics support.
	I0626 18:47:02.761161  421526 command_runner.go:130] > # enable_metrics = false
	I0626 18:47:02.761170  421526 command_runner.go:130] > # Specify enabled metrics collectors.
	I0626 18:47:02.761177  421526 command_runner.go:130] > # Per default all metrics are enabled.
	I0626 18:47:02.761183  421526 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0626 18:47:02.761191  421526 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0626 18:47:02.761199  421526 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0626 18:47:02.761204  421526 command_runner.go:130] > # metrics_collectors = [
	I0626 18:47:02.761208  421526 command_runner.go:130] > # 	"operations",
	I0626 18:47:02.761215  421526 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0626 18:47:02.761220  421526 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0626 18:47:02.761227  421526 command_runner.go:130] > # 	"operations_errors",
	I0626 18:47:02.761231  421526 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0626 18:47:02.761237  421526 command_runner.go:130] > # 	"image_pulls_by_name",
	I0626 18:47:02.761242  421526 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0626 18:47:02.761248  421526 command_runner.go:130] > # 	"image_pulls_failures",
	I0626 18:47:02.761253  421526 command_runner.go:130] > # 	"image_pulls_successes",
	I0626 18:47:02.761257  421526 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0626 18:47:02.761264  421526 command_runner.go:130] > # 	"image_layer_reuse",
	I0626 18:47:02.761268  421526 command_runner.go:130] > # 	"containers_oom_total",
	I0626 18:47:02.761276  421526 command_runner.go:130] > # 	"containers_oom",
	I0626 18:47:02.761283  421526 command_runner.go:130] > # 	"processes_defunct",
	I0626 18:47:02.761287  421526 command_runner.go:130] > # 	"operations_total",
	I0626 18:47:02.761293  421526 command_runner.go:130] > # 	"operations_latency_seconds",
	I0626 18:47:02.761298  421526 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0626 18:47:02.761304  421526 command_runner.go:130] > # 	"operations_errors_total",
	I0626 18:47:02.761309  421526 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0626 18:47:02.761315  421526 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0626 18:47:02.761320  421526 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0626 18:47:02.761326  421526 command_runner.go:130] > # 	"image_pulls_success_total",
	I0626 18:47:02.761331  421526 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0626 18:47:02.761338  421526 command_runner.go:130] > # 	"containers_oom_count_total",
	I0626 18:47:02.761341  421526 command_runner.go:130] > # ]
	I0626 18:47:02.761347  421526 command_runner.go:130] > # The port on which the metrics server will listen.
	I0626 18:47:02.761353  421526 command_runner.go:130] > # metrics_port = 9090
	I0626 18:47:02.761358  421526 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0626 18:47:02.761364  421526 command_runner.go:130] > # metrics_socket = ""
	I0626 18:47:02.761370  421526 command_runner.go:130] > # The certificate for the secure metrics server.
	I0626 18:47:02.761378  421526 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0626 18:47:02.761386  421526 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0626 18:47:02.761393  421526 command_runner.go:130] > # certificate on any modification event.
	I0626 18:47:02.761397  421526 command_runner.go:130] > # metrics_cert = ""
	I0626 18:47:02.761405  421526 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0626 18:47:02.761412  421526 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0626 18:47:02.761416  421526 command_runner.go:130] > # metrics_key = ""
	I0626 18:47:02.761424  421526 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0626 18:47:02.761428  421526 command_runner.go:130] > [crio.tracing]
	I0626 18:47:02.761434  421526 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0626 18:47:02.761440  421526 command_runner.go:130] > # enable_tracing = false
	I0626 18:47:02.761446  421526 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0626 18:47:02.761452  421526 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0626 18:47:02.761457  421526 command_runner.go:130] > # Number of samples to collect per million spans.
	I0626 18:47:02.761464  421526 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0626 18:47:02.761470  421526 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0626 18:47:02.761476  421526 command_runner.go:130] > [crio.stats]
	I0626 18:47:02.761482  421526 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0626 18:47:02.761490  421526 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0626 18:47:02.761496  421526 command_runner.go:130] > # stats_collection_period = 0
	I0626 18:47:02.761538  421526 command_runner.go:130] ! time="2023-06-26 18:47:02.755073717Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0626 18:47:02.761555  421526 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
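The dump above is the effective CRI-O TOML, with pause_image the only image setting left uncommented. As a minimal sketch (not part of minikube, and assuming the github.com/BurntSushi/toml package), the same key can be read programmatically; the struct mirrors only the [crio.image] table:

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// crioConfig mirrors just the [crio.image] table of the config dumped above.
	type crioConfig struct {
		Crio struct {
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
		} `toml:"crio"`
	}

	func main() {
		// A tiny excerpt in the same shape as the effective config above.
		snippet := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"

		var cfg crioConfig
		if _, err := toml.Decode(snippet, &cfg); err != nil {
			log.Fatalf("decode crio config: %v", err)
		}
		fmt.Println("pause image:", cfg.Crio.Image.PauseImage)
	}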
	I0626 18:47:02.761618  421526 cni.go:84] Creating CNI manager for ""
	I0626 18:47:02.761627  421526 cni.go:137] 2 nodes found, recommending kindnet
	I0626 18:47:02.761640  421526 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0626 18:47:02.761660  421526 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-306845 NodeName:multinode-306845-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0626 18:47:02.761775  421526 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-306845-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0626 18:47:02.761824  421526 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-306845-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-306845 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0626 18:47:02.761872  421526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0626 18:47:02.769357  421526 command_runner.go:130] > kubeadm
	I0626 18:47:02.769380  421526 command_runner.go:130] > kubectl
	I0626 18:47:02.769386  421526 command_runner.go:130] > kubelet
	I0626 18:47:02.770023  421526 binaries.go:44] Found k8s binaries, skipping transfer
	I0626 18:47:02.770086  421526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0626 18:47:02.778036  421526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0626 18:47:02.793554  421526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0626 18:47:02.809663  421526 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0626 18:47:02.812853  421526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0626 18:47:02.822479  421526 host.go:66] Checking if "multinode-306845" exists ...
	I0626 18:47:02.822678  421526 config.go:182] Loaded profile config "multinode-306845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:47:02.822812  421526 start.go:301] JoinCluster: &{Name:multinode-306845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-306845 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:47:02.822927  421526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0626 18:47:02.822990  421526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:47:02.838645  421526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa Username:docker}
	I0626 18:47:02.980234  421526 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token p57a2d.v0382s9cccy14snt --discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac 
	I0626 18:47:02.984595  421526 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0626 18:47:02.984650  421526 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p57a2d.v0382s9cccy14snt --discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-306845-m02"
	I0626 18:47:03.018261  421526 command_runner.go:130] > [preflight] Running pre-flight checks
	I0626 18:47:03.045983  421526 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0626 18:47:03.046006  421526 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1036-gcp
	I0626 18:47:03.046012  421526 command_runner.go:130] > OS: Linux
	I0626 18:47:03.046017  421526 command_runner.go:130] > CGROUPS_CPU: enabled
	I0626 18:47:03.046033  421526 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0626 18:47:03.046041  421526 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0626 18:47:03.046048  421526 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0626 18:47:03.046056  421526 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0626 18:47:03.046064  421526 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0626 18:47:03.046075  421526 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0626 18:47:03.046081  421526 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0626 18:47:03.046085  421526 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0626 18:47:03.122996  421526 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0626 18:47:03.123029  421526 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0626 18:47:03.147723  421526 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0626 18:47:03.147746  421526 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0626 18:47:03.147754  421526 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0626 18:47:03.219787  421526 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0626 18:47:05.233402  421526 command_runner.go:130] > This node has joined the cluster:
	I0626 18:47:05.233426  421526 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0626 18:47:05.233432  421526 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0626 18:47:05.233439  421526 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0626 18:47:05.236208  421526 command_runner.go:130] ! W0626 18:47:03.017791    1112 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0626 18:47:05.236250  421526 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1036-gcp\n", err: exit status 1
	I0626 18:47:05.236265  421526 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0626 18:47:05.236295  421526 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p57a2d.v0382s9cccy14snt --discovery-token-ca-cert-hash sha256:de006eb5b127e50d4fc17a3a52624114d6dd8c90abcc2a4dd7bcc578abe0baac --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-306845-m02": (2.251625331s)
	I0626 18:47:05.236315  421526 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0626 18:47:05.405870  421526 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0626 18:47:05.405908  421526 start.go:303] JoinCluster complete in 2.583094007s
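The join sequence above is two steps: print a join command on the control plane ("kubeadm token create --print-join-command --ttl=0"), then run it on the worker with extra flags. A hedged local sketch of the same flow using plain os/exec (minikube actually executes both steps over SSH via ssh_runner; the CRI socket and node name below are copied from the log):

	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Step 1 (control plane): emit a join command with a non-expiring token.
		out, err := exec.Command("sudo", "kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").Output()
		if err != nil {
			log.Fatalf("create join command: %v", err)
		}
		joinCmd := strings.Fields(strings.TrimSpace(string(out)))

		// Step 2 (worker): run that command with the same extra flags the log shows.
		args := append(joinCmd,
			"--ignore-preflight-errors=all",
			"--cri-socket", "/var/run/crio/crio.sock",
			"--node-name=multinode-306845-m02")
		join := exec.Command("sudo", args...)
		join.Stdout, join.Stderr = os.Stdout, os.Stderr
		if err := join.Run(); err != nil {
			log.Fatalf("kubeadm join: %v", err)
		}
	}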
	I0626 18:47:05.405921  421526 cni.go:84] Creating CNI manager for ""
	I0626 18:47:05.405929  421526 cni.go:137] 2 nodes found, recommending kindnet
	I0626 18:47:05.405969  421526 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0626 18:47:05.409393  421526 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0626 18:47:05.409413  421526 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0626 18:47:05.409422  421526 command_runner.go:130] > Device: 37h/55d	Inode: 2348905     Links: 1
	I0626 18:47:05.409432  421526 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0626 18:47:05.409441  421526 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0626 18:47:05.409452  421526 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0626 18:47:05.409460  421526 command_runner.go:130] > Change: 2023-06-26 18:26:22.579705851 +0000
	I0626 18:47:05.409465  421526 command_runner.go:130] >  Birth: 2023-06-26 18:26:22.555703512 +0000
	I0626 18:47:05.409511  421526 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0626 18:47:05.409521  421526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0626 18:47:05.425343  421526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0626 18:47:05.667418  421526 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0626 18:47:05.670750  421526 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0626 18:47:05.673253  421526 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0626 18:47:05.685019  421526 command_runner.go:130] > daemonset.apps/kindnet configured
	I0626 18:47:05.689142  421526 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:47:05.689353  421526 kapi.go:59] client config for multinode-306845: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.key", CAFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 18:47:05.689647  421526 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0626 18:47:05.689657  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:05.689665  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:05.689671  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:05.691630  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:05.691649  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:05.691657  421526 round_trippers.go:580]     Audit-Id: 9642d966-a37c-4358-963c-40232d66c300
	I0626 18:47:05.691663  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:05.691669  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:05.691674  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:05.691680  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:05.691685  421526 round_trippers.go:580]     Content-Length: 291
	I0626 18:47:05.691690  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:05 GMT
	I0626 18:47:05.691714  421526 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6089fab5-ac67-4428-9fcc-92ab5c9f4130","resourceVersion":"446","creationTimestamp":"2023-06-26T18:46:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0626 18:47:05.691804  421526 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-306845" context rescaled to 1 replicas
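The coredns rescale above goes through the deployment's scale subresource. A minimal client-go sketch of the same operation (assuming a standalone kubeconfig at ~/.kube/config, which is an assumption and not how minikube builds its client):

	package main

	import (
		"context"
		"log"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		ctx := context.Background()
		deployments := cs.AppsV1().Deployments("kube-system")

		// Read the current Scale of the coredns deployment, then pin it to 1 replica,
		// mirroring the GET .../deployments/coredns/scale and rescale logged above.
		scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
		log.Println("coredns rescaled to 1 replica")
	}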
	I0626 18:47:05.691830  421526 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0626 18:47:05.693993  421526 out.go:177] * Verifying Kubernetes components...
	I0626 18:47:05.695432  421526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 18:47:05.707117  421526 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:47:05.707436  421526 kapi.go:59] client config for multinode-306845: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.crt", KeyFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/profiles/multinode-306845/client.key", CAFile:"/home/jenkins/minikube-integration/16761-330054/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19bcba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0626 18:47:05.707763  421526 node_ready.go:35] waiting up to 6m0s for node "multinode-306845-m02" to be "Ready" ...
	I0626 18:47:05.707843  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:05.707854  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:05.707866  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:05.707878  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:05.710112  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:05.710138  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:05.710150  421526 round_trippers.go:580]     Audit-Id: 388df970-8a36-4d36-b3f7-7dc45f2fce91
	I0626 18:47:05.710159  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:05.710168  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:05.710177  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:05.710194  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:05.710204  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:05 GMT
	I0626 18:47:05.710406  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"478","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0626 18:47:06.211549  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:06.211570  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:06.211578  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:06.211585  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:06.213970  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:06.213992  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:06.214001  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:06.214007  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:06.214012  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:06.214018  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:06.214024  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:06 GMT
	I0626 18:47:06.214029  421526 round_trippers.go:580]     Audit-Id: 28628688-4551-4944-b46b-d9fbceededa0
	I0626 18:47:06.214142  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"478","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0626 18:47:06.711700  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:06.711723  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:06.711732  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:06.711738  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:06.714253  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:06.714277  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:06.714289  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:06.714297  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:06.714305  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:06.714314  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:06 GMT
	I0626 18:47:06.714323  421526 round_trippers.go:580]     Audit-Id: ac0cbf8d-609e-4f45-8d3c-1320654510ce
	I0626 18:47:06.714332  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:06.714468  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"478","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0626 18:47:07.211039  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:07.211075  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:07.211086  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:07.211094  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:07.213514  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:07.213537  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:07.213549  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:07.213558  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:07.213567  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:07.213579  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:07.213591  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:07 GMT
	I0626 18:47:07.213600  421526 round_trippers.go:580]     Audit-Id: c9289d58-54e1-4bc9-8b8a-e3ba939489d9
	I0626 18:47:07.213726  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"478","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0626 18:47:07.711223  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:07.711243  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:07.711255  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:07.711262  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:07.713580  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:07.713611  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:07.713619  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:07.713625  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:07.713630  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:07.713636  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:07 GMT
	I0626 18:47:07.713642  421526 round_trippers.go:580]     Audit-Id: e1426367-dc7c-4f15-a077-79760a6d6368
	I0626 18:47:07.713647  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:07.713738  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"478","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0626 18:47:07.714133  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:08.211291  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:08.211322  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:08.211336  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:08.211346  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:08.213997  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:08.214015  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:08.214023  421526 round_trippers.go:580]     Audit-Id: 0eb844dc-e70c-4cdb-bff4-765c99077880
	I0626 18:47:08.214028  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:08.214034  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:08.214039  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:08.214045  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:08.214050  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:08 GMT
	I0626 18:47:08.214153  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"478","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0626 18:47:08.711878  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:08.711907  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:08.711918  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:08.711927  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:08.714286  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:08.714314  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:08.714326  421526 round_trippers.go:580]     Audit-Id: e9cd9ce4-214c-4a73-93d7-36a58afe0599
	I0626 18:47:08.714333  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:08.714339  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:08.714344  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:08.714349  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:08.714355  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:08 GMT
	I0626 18:47:08.714463  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"478","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0626 18:47:09.211005  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:09.211029  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:09.211037  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:09.211043  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:09.213383  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:09.213404  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:09.213412  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:09.213417  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:09.213423  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:09 GMT
	I0626 18:47:09.213428  421526 round_trippers.go:580]     Audit-Id: 4d3cacaa-7a67-425e-8e1d-d9e16451c9c9
	I0626 18:47:09.213433  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:09.213440  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:09.213538  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"478","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0626 18:47:09.711294  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:09.711318  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:09.711326  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:09.711332  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:09.713493  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:09.713526  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:09.713534  421526 round_trippers.go:580]     Audit-Id: 6d771736-3fc7-4332-a3fe-016da7a6e919
	I0626 18:47:09.713543  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:09.713554  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:09.713563  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:09.713577  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:09.713587  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:09 GMT
	I0626 18:47:09.713710  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"478","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0626 18:47:10.211369  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:10.211391  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:10.211400  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:10.211406  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:10.213813  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:10.213833  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:10.213840  421526 round_trippers.go:580]     Audit-Id: 52383bd6-3600-46bf-8e00-a3ab8583b915
	I0626 18:47:10.213846  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:10.213852  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:10.213859  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:10.213866  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:10.213875  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:10 GMT
	I0626 18:47:10.214037  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:10.214342  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:10.711628  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:10.711649  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:10.711657  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:10.711664  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:10.714052  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:10.714073  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:10.714081  421526 round_trippers.go:580]     Audit-Id: cc0b3013-479e-4702-a352-aaaa9f79aab8
	I0626 18:47:10.714087  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:10.714093  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:10.714098  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:10.714157  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:10.714166  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:10 GMT
	I0626 18:47:10.714259  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:11.211837  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:11.211859  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:11.211868  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:11.211874  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:11.214230  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:11.214258  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:11.214269  421526 round_trippers.go:580]     Audit-Id: 76bcceaa-97e2-4804-833d-9db597753a4e
	I0626 18:47:11.214279  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:11.214287  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:11.214294  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:11.214299  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:11.214305  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:11 GMT
	I0626 18:47:11.214524  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:11.710988  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:11.711012  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:11.711021  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:11.711027  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:11.713270  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:11.713296  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:11.713304  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:11.713310  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:11.713316  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:11 GMT
	I0626 18:47:11.713321  421526 round_trippers.go:580]     Audit-Id: 0216b4ca-f97a-4910-b23f-ff53e0009110
	I0626 18:47:11.713329  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:11.713338  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:11.713457  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:12.210957  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:12.210980  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:12.210988  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:12.210994  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:12.213485  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:12.213526  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:12.213538  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:12.213555  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:12.213561  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:12.213567  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:12.213574  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:12 GMT
	I0626 18:47:12.213583  421526 round_trippers.go:580]     Audit-Id: d59858e9-3281-4029-8cb1-cfd5ef77b123
	I0626 18:47:12.213682  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:12.711219  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:12.711250  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:12.711262  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:12.711271  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:12.713713  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:12.713740  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:12.713749  421526 round_trippers.go:580]     Audit-Id: f5452566-dfa4-4205-82eb-d2ead2a8be77
	I0626 18:47:12.713755  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:12.713761  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:12.713766  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:12.713775  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:12.713780  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:12 GMT
	I0626 18:47:12.713920  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:12.714257  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:13.211586  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:13.211607  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:13.211618  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:13.211627  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:13.214113  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:13.214142  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:13.214153  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:13.214162  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:13.214171  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:13 GMT
	I0626 18:47:13.214186  421526 round_trippers.go:580]     Audit-Id: c228aab7-1946-4821-a7bd-837581f5e347
	I0626 18:47:13.214201  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:13.214210  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:13.214346  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:13.710982  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:13.711003  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:13.711012  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:13.711019  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:13.713030  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:13.713051  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:13.713059  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:13.713065  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:13 GMT
	I0626 18:47:13.713071  421526 round_trippers.go:580]     Audit-Id: 5217a1d1-b893-4a84-ac6e-9bb57e33e029
	I0626 18:47:13.713076  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:13.713081  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:13.713095  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:13.713206  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:14.211847  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:14.211870  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:14.211879  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:14.211885  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:14.214121  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:14.214140  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:14.214147  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:14.214153  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:14.214159  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:14 GMT
	I0626 18:47:14.214165  421526 round_trippers.go:580]     Audit-Id: 9a14cb0e-adc1-4834-b3da-3e1d464dfb44
	I0626 18:47:14.214170  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:14.214175  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:14.214259  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:14.711253  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:14.711276  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:14.711287  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:14.711295  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:14.713709  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:14.713739  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:14.713750  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:14 GMT
	I0626 18:47:14.713758  421526 round_trippers.go:580]     Audit-Id: 824bc50c-1c44-452f-958d-fae0b01285a2
	I0626 18:47:14.713767  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:14.713775  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:14.713784  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:14.713801  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:14.713894  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:15.211471  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:15.211496  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:15.211506  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:15.211512  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:15.213994  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:15.214018  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:15.214028  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:15.214036  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:15.214044  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:15 GMT
	I0626 18:47:15.214052  421526 round_trippers.go:580]     Audit-Id: fda0a09a-cd20-48cc-bc64-2fa6375c3001
	I0626 18:47:15.214060  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:15.214070  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:15.214170  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"498","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0626 18:47:15.214457  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:15.711825  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:15.711847  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:15.711855  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:15.711862  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:15.714012  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:15.714038  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:15.714049  421526 round_trippers.go:580]     Audit-Id: fd09fe98-07fe-4fe0-9d0a-45e0853c6e87
	I0626 18:47:15.714057  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:15.714066  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:15.714079  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:15.714088  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:15.714105  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:15 GMT
	I0626 18:47:15.714200  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:16.211836  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:16.211857  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:16.211866  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:16.211872  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:16.214139  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:16.214169  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:16.214180  421526 round_trippers.go:580]     Audit-Id: 338ccef7-05e2-4025-aed9-8379dd1c1390
	I0626 18:47:16.214190  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:16.214200  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:16.214212  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:16.214224  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:16.214232  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:16 GMT
	I0626 18:47:16.214369  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:16.711183  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:16.711205  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:16.711213  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:16.711219  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:16.713567  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:16.713600  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:16.713611  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:16 GMT
	I0626 18:47:16.713620  421526 round_trippers.go:580]     Audit-Id: 365672e7-841f-4fda-a904-d6ca25c4a3a9
	I0626 18:47:16.713630  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:16.713639  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:16.713650  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:16.713660  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:16.713763  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:17.211428  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:17.211451  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:17.211461  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:17.211470  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:17.213920  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:17.213939  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:17.213947  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:17.213955  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:17.213964  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:17 GMT
	I0626 18:47:17.213971  421526 round_trippers.go:580]     Audit-Id: 0f9ea4c0-670a-4d7c-bf58-2d87bc3cf768
	I0626 18:47:17.213990  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:17.213999  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:17.214108  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:17.711748  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:17.711769  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:17.711777  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:17.711783  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:17.713950  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:17.713970  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:17.713978  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:17.713984  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:17.713990  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:17.713996  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:17 GMT
	I0626 18:47:17.714001  421526 round_trippers.go:580]     Audit-Id: ac4d37ae-e91a-49f8-bbef-3408c45085b6
	I0626 18:47:17.714007  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:17.714100  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:17.714397  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:18.211676  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:18.211697  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:18.211705  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:18.211710  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:18.214087  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:18.214109  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:18.214117  421526 round_trippers.go:580]     Audit-Id: 9c6a552f-40a8-4bec-b398-ce7cf1fe18b8
	I0626 18:47:18.214123  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:18.214128  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:18.214133  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:18.214139  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:18.214144  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:18 GMT
	I0626 18:47:18.214236  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:18.711461  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:18.711488  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:18.711500  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:18.711509  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:18.713993  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:18.714019  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:18.714030  421526 round_trippers.go:580]     Audit-Id: d91d723d-b7e7-48b3-a195-a92d12b0f761
	I0626 18:47:18.714039  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:18.714047  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:18.714057  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:18.714069  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:18.714078  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:18 GMT
	I0626 18:47:18.714208  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:19.211749  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:19.211775  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:19.211788  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:19.211798  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:19.214326  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:19.214352  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:19.214367  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:19.214379  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:19.214396  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:19 GMT
	I0626 18:47:19.214405  421526 round_trippers.go:580]     Audit-Id: 3faead0a-3849-47f2-9a75-29e8e2a88190
	I0626 18:47:19.214415  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:19.214424  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:19.214549  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:19.711254  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:19.711277  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:19.711289  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:19.711298  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:19.713199  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:19.713225  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:19.713236  421526 round_trippers.go:580]     Audit-Id: 32f7ac58-fb00-4e19-80f5-620167c8704f
	I0626 18:47:19.713244  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:19.713251  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:19.713259  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:19.713277  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:19.713290  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:19 GMT
	I0626 18:47:19.713381  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:20.211076  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:20.211101  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:20.211110  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:20.211122  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:20.213629  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:20.213660  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:20.213672  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:20.213682  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:20.213692  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:20.213699  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:20 GMT
	I0626 18:47:20.213705  421526 round_trippers.go:580]     Audit-Id: 3ca15919-7eb7-4e96-a075-46472b1ab7fe
	I0626 18:47:20.213713  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:20.213820  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:20.214133  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:20.711369  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:20.711394  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:20.711403  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:20.711409  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:20.713913  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:20.713936  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:20.713944  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:20.713950  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:20.713958  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:20 GMT
	I0626 18:47:20.713966  421526 round_trippers.go:580]     Audit-Id: 51de0901-bb45-440d-820f-b0bcc4e8e292
	I0626 18:47:20.713974  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:20.713993  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:20.714096  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:21.211737  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:21.211758  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:21.211767  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:21.211773  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:21.214204  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:21.214233  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:21.214244  421526 round_trippers.go:580]     Audit-Id: 21d6c626-cb12-4b15-8452-eec15c853186
	I0626 18:47:21.214256  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:21.214269  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:21.214281  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:21.214292  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:21.214304  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:21 GMT
	I0626 18:47:21.214443  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:21.711964  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:21.711991  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:21.712002  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:21.712009  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:21.714084  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:21.714112  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:21.714125  421526 round_trippers.go:580]     Audit-Id: 27b5af10-afe5-4d66-8142-7e846f5ca4b5
	I0626 18:47:21.714135  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:21.714145  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:21.714155  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:21.714168  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:21.714178  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:21 GMT
	I0626 18:47:21.714300  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:22.211885  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:22.211905  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:22.211914  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:22.211920  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:22.214263  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:22.214283  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:22.214290  421526 round_trippers.go:580]     Audit-Id: 6b0d2b21-dc43-4f8d-aa32-f7a94a5fb54a
	I0626 18:47:22.214296  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:22.214302  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:22.214308  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:22.214317  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:22.214325  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:22 GMT
	I0626 18:47:22.214444  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:22.214748  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:22.711107  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:22.711133  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:22.711141  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:22.711147  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:22.713621  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:22.713650  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:22.713661  421526 round_trippers.go:580]     Audit-Id: 68a5afc8-3cb8-480c-89fb-536f4138200b
	I0626 18:47:22.713670  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:22.713680  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:22.713689  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:22.713699  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:22.713712  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:22 GMT
	I0626 18:47:22.713806  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:23.211047  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:23.211071  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:23.211079  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:23.211085  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:23.213397  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:23.213418  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:23.213426  421526 round_trippers.go:580]     Audit-Id: 82b403a1-7b1a-4a42-ab81-753507dbe54d
	I0626 18:47:23.213431  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:23.213437  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:23.213442  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:23.213453  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:23.213461  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:23 GMT
	I0626 18:47:23.213585  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:23.711153  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:23.711173  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:23.711181  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:23.711187  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:23.713544  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:23.713568  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:23.713576  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:23 GMT
	I0626 18:47:23.713582  421526 round_trippers.go:580]     Audit-Id: 758c6b77-9f44-4eeb-af7e-98f7eb11caa0
	I0626 18:47:23.713587  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:23.713593  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:23.713599  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:23.713605  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:23.713689  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:24.211426  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:24.211455  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:24.211467  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:24.211474  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:24.213847  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:24.213877  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:24.213888  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:24.213901  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:24.213909  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:24.213915  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:24.213922  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:24 GMT
	I0626 18:47:24.213932  421526 round_trippers.go:580]     Audit-Id: 964dd6f8-45f5-42c6-b002-eab9dc7b5c2d
	I0626 18:47:24.214079  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:24.711866  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:24.711890  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:24.711899  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:24.711905  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:24.714364  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:24.714388  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:24.714396  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:24.714401  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:24.714407  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:24.714412  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:24 GMT
	I0626 18:47:24.714420  421526 round_trippers.go:580]     Audit-Id: 53c9112c-7ef8-470e-a138-89246476752a
	I0626 18:47:24.714426  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:24.714521  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:24.714834  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:25.211111  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:25.211137  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:25.211145  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:25.211153  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:25.213377  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:25.213403  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:25.213414  421526 round_trippers.go:580]     Audit-Id: c757d97e-d3b4-496b-a62d-34ad85d4f799
	I0626 18:47:25.213427  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:25.213435  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:25.213444  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:25.213457  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:25.213465  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:25 GMT
	I0626 18:47:25.213580  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:25.711140  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:25.711161  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:25.711169  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:25.711176  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:25.713275  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:25.713297  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:25.713305  421526 round_trippers.go:580]     Audit-Id: 9fed20f9-3822-4606-b306-731e70f7011d
	I0626 18:47:25.713311  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:25.713317  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:25.713322  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:25.713328  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:25.713333  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:25 GMT
	I0626 18:47:25.713454  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:26.210935  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:26.210956  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:26.210965  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:26.210972  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:26.213362  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:26.213387  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:26.213397  421526 round_trippers.go:580]     Audit-Id: cb2cceb5-9b58-4eb0-a478-d482dc00a616
	I0626 18:47:26.213406  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:26.213415  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:26.213422  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:26.213427  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:26.213436  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:26 GMT
	I0626 18:47:26.213538  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:26.711103  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:26.711124  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:26.711132  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:26.711139  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:26.713546  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:26.713573  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:26.713581  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:26.713587  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:26.713593  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:26.713598  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:26.713604  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:26 GMT
	I0626 18:47:26.713609  421526 round_trippers.go:580]     Audit-Id: 6bc32ab9-6474-4320-971a-dc797b1964c8
	I0626 18:47:26.713723  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:27.211362  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:27.211383  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:27.211391  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:27.211397  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:27.213918  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:27.213947  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:27.213958  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:27 GMT
	I0626 18:47:27.213968  421526 round_trippers.go:580]     Audit-Id: 36d90388-7822-4cc7-bf1e-8fbc36fae692
	I0626 18:47:27.213977  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:27.213985  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:27.213996  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:27.214008  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:27.214146  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:27.214557  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:27.711740  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:27.711760  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:27.711768  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:27.711775  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:27.714441  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:27.714463  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:27.714470  421526 round_trippers.go:580]     Audit-Id: ef2c92a8-7c12-4638-92d2-5242bac834b5
	I0626 18:47:27.714477  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:27.714482  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:27.714488  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:27.714494  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:27.714500  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:27 GMT
	I0626 18:47:27.714592  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:28.211106  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:28.211128  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:28.211137  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:28.211143  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:28.213580  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:28.213602  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:28.213610  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:28.213616  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:28 GMT
	I0626 18:47:28.213622  421526 round_trippers.go:580]     Audit-Id: 850ac871-7234-466f-b166-5d395d528441
	I0626 18:47:28.213627  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:28.213633  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:28.213638  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:28.213786  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:28.711434  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:28.711459  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:28.711467  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:28.711473  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:28.713735  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:28.713757  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:28.713766  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:28.713772  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:28 GMT
	I0626 18:47:28.713777  421526 round_trippers.go:580]     Audit-Id: bc582000-93b5-463e-a2af-763e0cc08453
	I0626 18:47:28.713782  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:28.713788  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:28.713794  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:28.713897  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:29.211427  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:29.211453  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:29.211461  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:29.211468  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:29.213911  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:29.213936  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:29.213947  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:29.213956  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:29.213965  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:29.213977  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:29.213987  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:29 GMT
	I0626 18:47:29.213995  421526 round_trippers.go:580]     Audit-Id: eb7e7b2d-f7dd-4d4f-9fee-35eaa92d6acd
	I0626 18:47:29.214109  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:29.711725  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:29.711747  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:29.711755  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:29.711761  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:29.714175  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:29.714197  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:29.714205  421526 round_trippers.go:580]     Audit-Id: deb831fc-3307-49f8-8508-13f8313c656a
	I0626 18:47:29.714213  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:29.714223  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:29.714232  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:29.714245  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:29.714255  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:29 GMT
	I0626 18:47:29.714348  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:29.714657  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:30.211146  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:30.211167  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:30.211177  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:30.211196  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:30.213490  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:30.213508  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:30.213515  421526 round_trippers.go:580]     Audit-Id: de214a8e-0178-4a25-9812-4a4c16e7c03d
	I0626 18:47:30.213521  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:30.213527  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:30.213532  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:30.213537  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:30.213548  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:30 GMT
	I0626 18:47:30.213650  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:30.711222  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:30.711244  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:30.711253  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:30.711259  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:30.713691  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:30.713712  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:30.713720  421526 round_trippers.go:580]     Audit-Id: 9625b401-1117-4c9c-9de5-6717e9848458
	I0626 18:47:30.713726  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:30.713734  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:30.713743  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:30.713751  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:30.713759  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:30 GMT
	I0626 18:47:30.713877  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:31.211589  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:31.211626  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:31.211638  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:31.211648  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:31.213943  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:31.213963  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:31.213970  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:31.213976  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:31.213982  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:31.213987  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:31.213994  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:31 GMT
	I0626 18:47:31.214000  421526 round_trippers.go:580]     Audit-Id: a190a181-d90b-4097-af7f-fa7ca947eed6
	I0626 18:47:31.214126  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:31.711801  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:31.711823  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:31.711835  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:31.711844  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:31.714157  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:31.714179  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:31.714190  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:31.714198  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:31.714206  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:31.714215  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:31.714224  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:31 GMT
	I0626 18:47:31.714237  421526 round_trippers.go:580]     Audit-Id: 4fb7b70d-6b4a-4907-a385-3bf76c775c57
	I0626 18:47:31.714337  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:32.210932  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:32.210951  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:32.210959  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:32.210969  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:32.213245  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:32.213272  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:32.213284  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:32.213292  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:32.213298  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:32.213303  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:32 GMT
	I0626 18:47:32.213309  421526 round_trippers.go:580]     Audit-Id: a9776852-6ddf-4f46-8904-cea8ea89170e
	I0626 18:47:32.213314  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:32.213447  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:32.213958  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:32.710987  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:32.711012  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:32.711020  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:32.711027  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:32.713678  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:32.713706  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:32.713716  421526 round_trippers.go:580]     Audit-Id: 8e827142-595e-4834-8b23-347d440598b2
	I0626 18:47:32.713730  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:32.713738  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:32.713747  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:32.713755  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:32.713768  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:32 GMT
	I0626 18:47:32.713897  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:33.211426  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:33.211447  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:33.211456  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:33.211462  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:33.213746  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:33.213766  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:33.213774  421526 round_trippers.go:580]     Audit-Id: 5ac1f362-92ed-4006-a982-483fcc881687
	I0626 18:47:33.213783  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:33.213789  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:33.213795  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:33.213801  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:33.213809  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:33 GMT
	I0626 18:47:33.213914  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:33.711610  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:33.711631  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:33.711639  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:33.711646  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:33.713917  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:33.713936  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:33.713943  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:33 GMT
	I0626 18:47:33.713949  421526 round_trippers.go:580]     Audit-Id: df7bdb11-5e91-4f48-96c5-c793ea72eed2
	I0626 18:47:33.713955  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:33.713960  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:33.713965  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:33.713973  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:33.714048  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:34.211272  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:34.211295  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:34.211303  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:34.211310  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:34.213711  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:34.213730  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:34.213738  421526 round_trippers.go:580]     Audit-Id: 1e94c7da-3466-4b7f-b3e8-680218e4f1b6
	I0626 18:47:34.213744  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:34.213749  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:34.213754  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:34.213759  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:34.213765  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:34 GMT
	I0626 18:47:34.213872  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:34.214174  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:34.711569  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:34.711592  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:34.711604  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:34.711612  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:34.713910  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:34.713936  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:34.713947  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:34.713954  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:34.713959  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:34.713966  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:34 GMT
	I0626 18:47:34.713974  421526 round_trippers.go:580]     Audit-Id: 56f0c497-3d59-4701-8ff6-8b785d876a48
	I0626 18:47:34.713982  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:34.714121  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:35.210923  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:35.210944  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:35.210952  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:35.210958  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:35.213056  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:35.213083  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:35.213092  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:35.213101  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:35.213110  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:35.213120  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:35.213131  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:35 GMT
	I0626 18:47:35.213149  421526 round_trippers.go:580]     Audit-Id: 87cef91f-4d8d-498f-ad47-e22a2d701629
	I0626 18:47:35.213280  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:35.711888  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:35.711911  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:35.711920  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:35.711927  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:35.714064  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:35.714091  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:35.714102  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:35.714112  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:35 GMT
	I0626 18:47:35.714121  421526 round_trippers.go:580]     Audit-Id: d9e34692-bf88-41c7-80a6-c23193d8c69f
	I0626 18:47:35.714130  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:35.714140  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:35.714155  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:35.714246  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:36.211867  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:36.211889  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:36.211900  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:36.211908  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:36.214066  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:36.214087  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:36.214094  421526 round_trippers.go:580]     Audit-Id: badc1662-a02e-4655-9b2f-621dc211358b
	I0626 18:47:36.214099  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:36.214105  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:36.214110  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:36.214115  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:36.214121  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:36 GMT
	I0626 18:47:36.214235  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:36.214610  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:36.711891  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:36.711912  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:36.711921  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:36.711927  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:36.714304  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:36.714331  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:36.714339  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:36.714345  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:36.714351  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:36.714357  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:36.714362  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:36 GMT
	I0626 18:47:36.714368  421526 round_trippers.go:580]     Audit-Id: b85345eb-84ba-4407-98c3-c950a2072051
	I0626 18:47:36.714459  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:37.210975  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:37.210997  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:37.211005  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:37.211012  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:37.213340  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:37.213361  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:37.213371  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:37.213382  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:37.213390  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:37.213404  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:37.213412  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:37 GMT
	I0626 18:47:37.213418  421526 round_trippers.go:580]     Audit-Id: 3c662142-cc76-4656-b9ea-b68e7b1a3804
	I0626 18:47:37.213538  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:37.711058  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:37.711081  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:37.711091  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:37.711098  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:37.713098  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:37.713124  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:37.713136  421526 round_trippers.go:580]     Audit-Id: be73837d-809b-440b-a3d3-fa83707a6998
	I0626 18:47:37.713145  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:37.713154  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:37.713167  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:37.713176  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:37.713187  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:37 GMT
	I0626 18:47:37.713290  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:38.211969  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:38.211991  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:38.212003  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:38.212014  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:38.214530  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:38.214553  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:38.214562  421526 round_trippers.go:580]     Audit-Id: fe041c84-2678-4a9d-a4a7-9af825c7a4ed
	I0626 18:47:38.214570  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:38.214577  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:38.214587  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:38.214600  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:38.214613  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:38 GMT
	I0626 18:47:38.214733  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:38.215041  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:38.711283  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:38.711310  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:38.711322  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:38.711332  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:38.713723  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:38.713746  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:38.713756  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:38 GMT
	I0626 18:47:38.713763  421526 round_trippers.go:580]     Audit-Id: 2227c8ef-19d0-4270-9ebf-2dcf597fd0a0
	I0626 18:47:38.713771  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:38.713781  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:38.713790  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:38.713800  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:38.713888  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:39.211625  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:39.211651  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:39.211664  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:39.211673  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:39.213968  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:39.213993  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:39.214001  421526 round_trippers.go:580]     Audit-Id: b3485e16-09e7-48e9-8fa4-a44dc63d1318
	I0626 18:47:39.214007  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:39.214013  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:39.214018  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:39.214024  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:39.214030  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:39 GMT
	I0626 18:47:39.214155  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:39.711876  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:39.711902  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:39.711916  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:39.711924  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:39.714144  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:39.714169  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:39.714179  421526 round_trippers.go:580]     Audit-Id: c3327243-8914-4a87-9b73-2581342569c8
	I0626 18:47:39.714187  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:39.714205  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:39.714212  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:39.714220  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:39.714226  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:39 GMT
	I0626 18:47:39.714317  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:40.211086  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:40.211106  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:40.211124  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:40.211130  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:40.213426  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:40.213448  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:40.213455  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:40.213461  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:40 GMT
	I0626 18:47:40.213466  421526 round_trippers.go:580]     Audit-Id: 19e9eec6-dae1-4733-816c-81b7d12d06cc
	I0626 18:47:40.213472  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:40.213480  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:40.213485  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:40.213582  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:40.711126  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:40.711149  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:40.711158  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:40.711164  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:40.713731  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:40.713757  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:40.713768  421526 round_trippers.go:580]     Audit-Id: 45aa0d8c-52cd-43db-9d28-93377c0636d4
	I0626 18:47:40.713777  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:40.713787  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:40.713801  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:40.713810  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:40.713822  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:40 GMT
	I0626 18:47:40.713937  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:40.714269  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:41.211395  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:41.211423  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:41.211435  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:41.211445  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:41.213900  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:41.213926  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:41.213938  421526 round_trippers.go:580]     Audit-Id: 69c5ad2c-c5ba-4322-b558-ede6493b7218
	I0626 18:47:41.213947  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:41.213955  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:41.213960  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:41.213966  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:41.213971  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:41 GMT
	I0626 18:47:41.214080  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:41.711801  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:41.711836  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:41.711851  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:41.711861  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:41.714397  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:41.714426  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:41.714437  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:41.714447  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:41.714455  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:41.714466  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:41 GMT
	I0626 18:47:41.714474  421526 round_trippers.go:580]     Audit-Id: f71888e4-068d-4beb-883c-77cd355ee2eb
	I0626 18:47:41.714485  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:41.714595  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:42.211202  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:42.211224  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:42.211233  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:42.211240  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:42.213596  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:42.213615  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:42.213622  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:42.213628  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:42.213633  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:42.213638  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:42 GMT
	I0626 18:47:42.213644  421526 round_trippers.go:580]     Audit-Id: 4c7e73c2-315a-488a-b7b4-1c08d96a5c80
	I0626 18:47:42.213649  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:42.213735  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:42.711011  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:42.711032  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:42.711041  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:42.711047  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:42.713290  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:42.713317  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:42.713329  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:42 GMT
	I0626 18:47:42.713338  421526 round_trippers.go:580]     Audit-Id: f4e7249e-eda8-42e0-a9bf-531a830bef36
	I0626 18:47:42.713344  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:42.713350  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:42.713356  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:42.713364  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:42.713471  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:43.210969  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:43.210990  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:43.210998  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:43.211004  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:43.213428  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:43.213454  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:43.213461  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:43.213467  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:43.213472  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:43.213478  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:43 GMT
	I0626 18:47:43.213483  421526 round_trippers.go:580]     Audit-Id: 985fc0d0-5772-4398-b43f-bdf845dff6e9
	I0626 18:47:43.213488  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:43.213606  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:43.213908  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:43.711174  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:43.711197  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:43.711206  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:43.711212  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:43.713451  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:43.713481  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:43.713488  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:43.713494  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:43.713500  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:43 GMT
	I0626 18:47:43.713505  421526 round_trippers.go:580]     Audit-Id: 940154c0-163f-4698-9f51-ecea132dfa96
	I0626 18:47:43.713514  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:43.713525  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:43.713620  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:44.211180  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:44.211202  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:44.211218  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:44.211225  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:44.213868  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:44.213897  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:44.213907  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:44.213916  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:44.213924  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:44.213933  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:44.213943  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:44 GMT
	I0626 18:47:44.213953  421526 round_trippers.go:580]     Audit-Id: 28f71da5-6fc3-4982-bcee-16189a28ac5c
	I0626 18:47:44.214092  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:44.711997  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:44.712024  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:44.712036  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:44.712046  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:44.714597  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:44.714623  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:44.714631  421526 round_trippers.go:580]     Audit-Id: 40bec232-2ac3-451f-ab5d-5a9d2b483fc8
	I0626 18:47:44.714637  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:44.714643  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:44.714648  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:44.714654  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:44.714666  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:44 GMT
	I0626 18:47:44.714780  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:45.211270  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:45.211294  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:45.211302  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:45.211308  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:45.213657  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:45.213679  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:45.213688  421526 round_trippers.go:580]     Audit-Id: 87f18f97-86b2-4c22-8b29-60ba75320740
	I0626 18:47:45.213693  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:45.213699  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:45.213704  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:45.213710  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:45.213716  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:45 GMT
	I0626 18:47:45.213821  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:45.214111  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:45.711313  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:45.711339  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:45.711351  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:45.711361  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:45.713669  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:45.713692  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:45.713700  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:45.713706  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:45.713712  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:45 GMT
	I0626 18:47:45.713717  421526 round_trippers.go:580]     Audit-Id: 1d79468b-1438-4af0-80d0-39ea0ca67ad5
	I0626 18:47:45.713725  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:45.713736  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:45.713864  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:46.211536  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:46.211557  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:46.211566  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:46.211573  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:46.213959  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:46.213986  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:46.213998  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:46 GMT
	I0626 18:47:46.214008  421526 round_trippers.go:580]     Audit-Id: 0ab2e9ac-3ea2-4ab0-b390-c5b226fd9b3f
	I0626 18:47:46.214017  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:46.214028  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:46.214036  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:46.214042  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:46.214148  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:46.711770  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:46.711797  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:46.711805  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:46.711811  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:46.714207  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:46.714231  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:46.714244  421526 round_trippers.go:580]     Audit-Id: 20717f69-563c-4ebb-a196-1314b4f37da6
	I0626 18:47:46.714252  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:46.714257  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:46.714263  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:46.714268  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:46.714273  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:46 GMT
	I0626 18:47:46.714358  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:47.211542  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:47.211574  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:47.211587  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:47.211597  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:47.213974  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:47.213996  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:47.214005  421526 round_trippers.go:580]     Audit-Id: 046e3540-8712-4f66-8ebb-707d7fe3fbb5
	I0626 18:47:47.214014  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:47.214021  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:47.214029  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:47.214040  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:47.214049  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:47 GMT
	I0626 18:47:47.214174  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:47.214503  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:47.711830  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:47.711853  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:47.711861  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:47.711868  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:47.714170  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:47.714189  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:47.714199  421526 round_trippers.go:580]     Audit-Id: 9f7ddf44-7f6c-4299-86c6-45bf64057ffb
	I0626 18:47:47.714209  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:47.714217  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:47.714226  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:47.714238  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:47.714246  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:47 GMT
	I0626 18:47:47.714325  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:48.211962  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:48.211997  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:48.212009  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:48.212019  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:48.214487  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:48.214514  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:48.214522  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:48.214528  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:48.214533  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:48.214539  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:48 GMT
	I0626 18:47:48.214544  421526 round_trippers.go:580]     Audit-Id: fd3af150-46db-4917-af43-bda10e12b036
	I0626 18:47:48.214549  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:48.214676  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:48.711350  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:48.711377  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:48.711388  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:48.711396  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:48.713858  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:48.713886  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:48.713897  421526 round_trippers.go:580]     Audit-Id: 362cbad0-96e0-40df-a23b-0039113550dc
	I0626 18:47:48.713906  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:48.713914  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:48.713922  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:48.713931  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:48.713947  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:48 GMT
	I0626 18:47:48.714050  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:49.211709  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:49.211734  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.211746  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.211755  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.214210  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:49.214233  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.214243  421526 round_trippers.go:580]     Audit-Id: 5b4daeb2-d557-41ea-9955-f23187356168
	I0626 18:47:49.214252  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.214263  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.214270  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.214277  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.214284  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.214413  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"504","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0626 18:47:49.214755  421526 node_ready.go:58] node "multinode-306845-m02" has status "Ready":"False"
	I0626 18:47:49.711171  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:49.711191  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.711200  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.711206  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.713535  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:49.713560  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.713570  421526 round_trippers.go:580]     Audit-Id: 2c418ad2-314a-4ab9-ae83-8b58018ab741
	I0626 18:47:49.713578  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.713586  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.713595  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.713608  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.713620  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.713712  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"548","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0626 18:47:49.714067  421526 node_ready.go:49] node "multinode-306845-m02" has status "Ready":"True"
	I0626 18:47:49.714099  421526 node_ready.go:38] duration metric: took 44.006317689s waiting for node "multinode-306845-m02" to be "Ready" ...
	I0626 18:47:49.714109  421526 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 18:47:49.714179  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0626 18:47:49.714188  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.714195  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.714202  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.717336  421526 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0626 18:47:49.717360  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.717371  421526 round_trippers.go:580]     Audit-Id: 9b3910a7-df07-4f12-87a5-667771d1aef0
	I0626 18:47:49.717380  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.717388  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.717397  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.717422  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.717431  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.717963  421526 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"548"},"items":[{"metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"442","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0626 18:47:49.720092  421526 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:49.720168  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-d67vq
	I0626 18:47:49.720177  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.720184  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.720190  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.722057  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:49.722082  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.722092  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.722098  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.722104  421526 round_trippers.go:580]     Audit-Id: 0699bea1-8ca6-4ec8-951b-e7cd66fb30a4
	I0626 18:47:49.722109  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.722114  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.722120  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.722237  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-d67vq","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"631de45d-1e2e-45b8-bdb0-1220d4b68aef","resourceVersion":"442","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"5ee75eb5-004c-43e6-b246-5e5684a486f7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ee75eb5-004c-43e6-b246-5e5684a486f7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0626 18:47:49.722733  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:47:49.722747  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.722755  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.722761  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.724445  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:49.724462  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.724478  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.724486  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.724492  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.724500  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.724506  421526 round_trippers.go:580]     Audit-Id: 95d278fe-4ecb-4655-9c9f-a790954ee75f
	I0626 18:47:49.724513  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.724607  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:47:49.724918  421526 pod_ready.go:92] pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace has status "Ready":"True"
	I0626 18:47:49.724932  421526 pod_ready.go:81] duration metric: took 4.817934ms waiting for pod "coredns-5d78c9869d-d67vq" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:49.724941  421526 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:49.724985  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-306845
	I0626 18:47:49.724992  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.724999  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.725004  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.726658  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:49.726672  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.726678  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.726684  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.726689  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.726698  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.726703  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.726711  421526 round_trippers.go:580]     Audit-Id: 41fc7064-704f-45ae-b878-8a1150b1c861
	I0626 18:47:49.726783  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-306845","namespace":"kube-system","uid":"8e600dee-f767-4680-b831-a3ff0dba8338","resourceVersion":"296","creationTimestamp":"2023-06-26T18:46:27Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"08749df88deb7f3823978e88f0a29b74","kubernetes.io/config.mirror":"08749df88deb7f3823978e88f0a29b74","kubernetes.io/config.seen":"2023-06-26T18:46:26.996335605Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0626 18:47:49.727189  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:47:49.727204  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.727211  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.727217  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.729309  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:49.729325  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.729332  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.729341  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.729349  421526 round_trippers.go:580]     Audit-Id: 4abc14de-948c-43ff-9f99-0ccd9cfa2b06
	I0626 18:47:49.729361  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.729370  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.729382  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.729466  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:47:49.729750  421526 pod_ready.go:92] pod "etcd-multinode-306845" in "kube-system" namespace has status "Ready":"True"
	I0626 18:47:49.729763  421526 pod_ready.go:81] duration metric: took 4.817771ms waiting for pod "etcd-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:49.729776  421526 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:49.729821  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-306845
	I0626 18:47:49.729827  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.729834  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.729842  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.731501  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:49.731515  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.731528  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.731537  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.731548  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.731559  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.731568  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.731578  421526 round_trippers.go:580]     Audit-Id: b66b3afd-05fa-4057-814d-e900daeaca11
	I0626 18:47:49.731686  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-306845","namespace":"kube-system","uid":"0cbdbfe4-b817-467c-ab7e-361e8faf4005","resourceVersion":"330","creationTimestamp":"2023-06-26T18:46:27Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"c36517ea0d43b6839ecd7e61be583393","kubernetes.io/config.mirror":"c36517ea0d43b6839ecd7e61be583393","kubernetes.io/config.seen":"2023-06-26T18:46:26.996342653Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0626 18:47:49.732089  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:47:49.732103  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.732111  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.732117  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.733690  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:49.733707  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.733717  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.733725  421526 round_trippers.go:580]     Audit-Id: 5107ec2b-354b-4ca3-948f-d1185d14e6dc
	I0626 18:47:49.733736  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.733748  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.733753  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.733761  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.733910  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:47:49.734179  421526 pod_ready.go:92] pod "kube-apiserver-multinode-306845" in "kube-system" namespace has status "Ready":"True"
	I0626 18:47:49.734191  421526 pod_ready.go:81] duration metric: took 4.406787ms waiting for pod "kube-apiserver-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:49.734200  421526 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:49.734239  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-306845
	I0626 18:47:49.734246  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.734253  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.734259  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.735973  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:49.735990  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.735999  421526 round_trippers.go:580]     Audit-Id: 6ca19a92-273a-4c50-b7f1-e58a2e3ea3ab
	I0626 18:47:49.736011  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.736020  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.736038  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.736044  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.736049  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.736144  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-306845","namespace":"kube-system","uid":"39ac4739-5588-4f57-8ea6-2769d8db08a9","resourceVersion":"322","creationTimestamp":"2023-06-26T18:46:27Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d1d6a742238884753c6b9e158a03a88d","kubernetes.io/config.mirror":"d1d6a742238884753c6b9e158a03a88d","kubernetes.io/config.seen":"2023-06-26T18:46:26.996344729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0626 18:47:49.736505  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:47:49.736518  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.736525  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.736531  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.738126  421526 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0626 18:47:49.738147  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.738157  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.738166  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.738175  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.738182  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.738195  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.738211  421526 round_trippers.go:580]     Audit-Id: ff771a88-16b1-4dee-b0f1-98915018773e
	I0626 18:47:49.738324  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:47:49.738671  421526 pod_ready.go:92] pod "kube-controller-manager-multinode-306845" in "kube-system" namespace has status "Ready":"True"
	I0626 18:47:49.738688  421526 pod_ready.go:81] duration metric: took 4.482476ms waiting for pod "kube-controller-manager-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:49.738696  421526 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lgtgc" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:49.912094  421526 request.go:628] Waited for 173.316505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgtgc
	I0626 18:47:49.912154  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lgtgc
	I0626 18:47:49.912158  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:49.912167  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:49.912173  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:49.914630  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:49.914653  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:49.914664  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:49.914672  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:49 GMT
	I0626 18:47:49.914680  421526 round_trippers.go:580]     Audit-Id: d9d72750-462d-41ad-9e78-77c24fcf9852
	I0626 18:47:49.914689  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:49.914712  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:49.914721  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:49.914851  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lgtgc","generateName":"kube-proxy-","namespace":"kube-system","uid":"39b1874e-14d5-45cc-ad38-67abeec8c5d0","resourceVersion":"515","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"923e24ed-c31b-4710-b3c3-f3667483f706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"923e24ed-c31b-4710-b3c3-f3667483f706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0626 18:47:50.111235  421526 request.go:628] Waited for 195.879355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:50.111297  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845-m02
	I0626 18:47:50.111302  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:50.111310  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:50.111317  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:50.113791  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:50.113823  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:50.113834  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:50 GMT
	I0626 18:47:50.113844  421526 round_trippers.go:580]     Audit-Id: 101bf120-4a4c-4cd8-95fe-e47875327997
	I0626 18:47:50.113852  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:50.113861  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:50.113869  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:50.113879  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:50.113993  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845-m02","uid":"9848da22-b288-4db5-b0df-3ad99e1a85b3","resourceVersion":"550","creationTimestamp":"2023-06-26T18:47:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:47:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5176 chars]
	I0626 18:47:50.114357  421526 pod_ready.go:92] pod "kube-proxy-lgtgc" in "kube-system" namespace has status "Ready":"True"
	I0626 18:47:50.114374  421526 pod_ready.go:81] duration metric: took 375.673042ms waiting for pod "kube-proxy-lgtgc" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:50.114386  421526 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sk9fw" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:50.311871  421526 request.go:628] Waited for 197.407815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk9fw
	I0626 18:47:50.311931  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk9fw
	I0626 18:47:50.311936  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:50.311944  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:50.311953  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:50.314299  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:50.314327  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:50.314335  421526 round_trippers.go:580]     Audit-Id: 2dafb630-b707-4f4e-8b24-2418c4d58f46
	I0626 18:47:50.314341  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:50.314347  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:50.314352  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:50.314358  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:50.314364  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:50 GMT
	I0626 18:47:50.314541  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sk9fw","generateName":"kube-proxy-","namespace":"kube-system","uid":"b17ff684-f726-4e3f-9e2e-2270a77f0712","resourceVersion":"410","creationTimestamp":"2023-06-26T18:46:40Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"923e24ed-c31b-4710-b3c3-f3667483f706","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"923e24ed-c31b-4710-b3c3-f3667483f706\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0626 18:47:50.511254  421526 request.go:628] Waited for 196.277889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:47:50.511308  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:47:50.511317  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:50.511325  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:50.511331  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:50.513670  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:50.513693  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:50.513701  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:50.513708  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:50.513713  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:50.513718  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:50 GMT
	I0626 18:47:50.513724  421526 round_trippers.go:580]     Audit-Id: 7bc2382f-e525-4135-9a88-3ac1f832b26d
	I0626 18:47:50.513729  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:50.514236  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:47:50.514758  421526 pod_ready.go:92] pod "kube-proxy-sk9fw" in "kube-system" namespace has status "Ready":"True"
	I0626 18:47:50.514772  421526 pod_ready.go:81] duration metric: took 400.374184ms waiting for pod "kube-proxy-sk9fw" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:50.514784  421526 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:50.711689  421526 request.go:628] Waited for 196.824281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-306845
	I0626 18:47:50.711750  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-306845
	I0626 18:47:50.711758  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:50.711771  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:50.711786  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:50.714267  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:50.714287  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:50.714294  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:50.714300  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:50 GMT
	I0626 18:47:50.714305  421526 round_trippers.go:580]     Audit-Id: dd7c5c21-f991-43f6-8e42-cd2bac5fc916
	I0626 18:47:50.714310  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:50.714316  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:50.714321  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:50.714417  421526 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-306845","namespace":"kube-system","uid":"6fdedf36-be16-44b5-b0bf-acb8ed9ada95","resourceVersion":"293","creationTimestamp":"2023-06-26T18:46:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b5afba694f43941ae62fa00a5d7320f4","kubernetes.io/config.mirror":"b5afba694f43941ae62fa00a5d7320f4","kubernetes.io/config.seen":"2023-06-26T18:46:20.468830188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-06-26T18:46:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0626 18:47:50.912192  421526 request.go:628] Waited for 197.361243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:47:50.912262  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-306845
	I0626 18:47:50.912269  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:50.912281  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:50.912291  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:50.914873  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:50.914891  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:50.914899  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:50 GMT
	I0626 18:47:50.914904  421526 round_trippers.go:580]     Audit-Id: 2a74cd02-8b44-48af-b0e7-29961be7613e
	I0626 18:47:50.914910  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:50.914915  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:50.914920  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:50.914925  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:50.915041  421526 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-06-26T18:46:23Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0626 18:47:50.915348  421526 pod_ready.go:92] pod "kube-scheduler-multinode-306845" in "kube-system" namespace has status "Ready":"True"
	I0626 18:47:50.915362  421526 pod_ready.go:81] duration metric: took 400.570502ms waiting for pod "kube-scheduler-multinode-306845" in "kube-system" namespace to be "Ready" ...
	I0626 18:47:50.915371  421526 pod_ready.go:38] duration metric: took 1.20124685s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0626 18:47:50.915395  421526 system_svc.go:44] waiting for kubelet service to be running ....
	I0626 18:47:50.915440  421526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 18:47:50.926547  421526 system_svc.go:56] duration metric: took 11.14194ms WaitForService to wait for kubelet.
	I0626 18:47:50.926574  421526 kubeadm.go:581] duration metric: took 45.234722123s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0626 18:47:50.926594  421526 node_conditions.go:102] verifying NodePressure condition ...
	I0626 18:47:51.112023  421526 request.go:628] Waited for 185.335053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0626 18:47:51.112076  421526 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0626 18:47:51.112081  421526 round_trippers.go:469] Request Headers:
	I0626 18:47:51.112088  421526 round_trippers.go:473]     Accept: application/json, */*
	I0626 18:47:51.112099  421526 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0626 18:47:51.114755  421526 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0626 18:47:51.114782  421526 round_trippers.go:577] Response Headers:
	I0626 18:47:51.114794  421526 round_trippers.go:580]     Cache-Control: no-cache, private
	I0626 18:47:51.114804  421526 round_trippers.go:580]     Content-Type: application/json
	I0626 18:47:51.114813  421526 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 926db036-5aab-4061-8c88-caa33317bc62
	I0626 18:47:51.114821  421526 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8dbcd6f8-8454-46e6-9d22-4dad3619df42
	I0626 18:47:51.114831  421526 round_trippers.go:580]     Date: Mon, 26 Jun 2023 18:47:51 GMT
	I0626 18:47:51.114844  421526 round_trippers.go:580]     Audit-Id: 02a60dfd-4f4d-4b65-81dc-587147255f14
	I0626 18:47:51.115063  421526 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"550"},"items":[{"metadata":{"name":"multinode-306845","uid":"d3a9d8d5-10be-4292-8233-d2a4ed4318ed","resourceVersion":"416","creationTimestamp":"2023-06-26T18:46:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-306845","kubernetes.io/os":"linux","minikube.k8s.io/commit":"759becbe25e432e7a4042c59713ee144df2072e1","minikube.k8s.io/name":"multinode-306845","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_06_26T18_46_27_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12168 chars]
	I0626 18:47:51.115535  421526 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0626 18:47:51.115575  421526 node_conditions.go:123] node cpu capacity is 8
	I0626 18:47:51.115592  421526 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0626 18:47:51.115596  421526 node_conditions.go:123] node cpu capacity is 8
	I0626 18:47:51.115600  421526 node_conditions.go:105] duration metric: took 189.001334ms to run NodePressure ...
	I0626 18:47:51.115610  421526 start.go:228] waiting for startup goroutines ...
	I0626 18:47:51.115645  421526 start.go:242] writing updated cluster config ...
	I0626 18:47:51.115936  421526 ssh_runner.go:195] Run: rm -f paused
	I0626 18:47:51.161871  421526 start.go:652] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0626 18:47:51.165108  421526 out.go:177] * Done! kubectl is now configured to use "multinode-306845" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Jun 26 18:46:43 multinode-306845 crio[952]: time="2023-06-26 18:46:43.366783349Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/29ef3f2d52b616bfca5da5e17213798ce6542fedd2ee7b227a5528b19e9ff1de/merged/etc/passwd: no such file or directory"
	Jun 26 18:46:43 multinode-306845 crio[952]: time="2023-06-26 18:46:43.366819664Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/29ef3f2d52b616bfca5da5e17213798ce6542fedd2ee7b227a5528b19e9ff1de/merged/etc/group: no such file or directory"
	Jun 26 18:46:43 multinode-306845 crio[952]: time="2023-06-26 18:46:43.403224509Z" level=info msg="Created container f9c5a7a1524d38b67d8132022db686c182b631622e20722da51b4d4c7a366bf3: kube-system/storage-provisioner/storage-provisioner" id=1a2dc4cf-c0d9-4ea6-8dce-ded7d8313e12 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 26 18:46:43 multinode-306845 crio[952]: time="2023-06-26 18:46:43.403832462Z" level=info msg="Starting container: f9c5a7a1524d38b67d8132022db686c182b631622e20722da51b4d4c7a366bf3" id=02cc5604-5e68-4968-940c-24a744da4b09 name=/runtime.v1.RuntimeService/StartContainer
	Jun 26 18:46:43 multinode-306845 crio[952]: time="2023-06-26 18:46:43.411477309Z" level=info msg="Started container" PID=2371 containerID=f9c5a7a1524d38b67d8132022db686c182b631622e20722da51b4d4c7a366bf3 description=kube-system/storage-provisioner/storage-provisioner id=02cc5604-5e68-4968-940c-24a744da4b09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86c83b628a0bd741ac3fe0e67f14826b0a253173bbca4331e1098946cf68aff7
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.164168648Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-cxsjd/POD" id=2a83512d-1c58-4747-9075-57f2ce5035db name=/runtime.v1.RuntimeService/RunPodSandbox
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.164252905Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.178567222Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-cxsjd Namespace:default ID:546d2bac34d2a989e353d3e0d18491f98e111c461701b4f8cd1bfc8f087c7e0e UID:42fac0b7-5d0b-40d9-ae42-4f9b17ca8dc7 NetNS:/var/run/netns/6e99b81e-6876-4b8c-a244-46b25eb09d0c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.178606191Z" level=info msg="Adding pod default_busybox-67b7f59bb-cxsjd to CNI network \"kindnet\" (type=ptp)"
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.187143280Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-cxsjd Namespace:default ID:546d2bac34d2a989e353d3e0d18491f98e111c461701b4f8cd1bfc8f087c7e0e UID:42fac0b7-5d0b-40d9-ae42-4f9b17ca8dc7 NetNS:/var/run/netns/6e99b81e-6876-4b8c-a244-46b25eb09d0c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.187264849Z" level=info msg="Checking pod default_busybox-67b7f59bb-cxsjd for CNI network kindnet (type=ptp)"
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.208509609Z" level=info msg="Ran pod sandbox 546d2bac34d2a989e353d3e0d18491f98e111c461701b4f8cd1bfc8f087c7e0e with infra container: default/busybox-67b7f59bb-cxsjd/POD" id=2a83512d-1c58-4747-9075-57f2ce5035db name=/runtime.v1.RuntimeService/RunPodSandbox
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.209893554Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=60cad567-69db-4643-a03f-1debbc960dc7 name=/runtime.v1.ImageService/ImageStatus
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.210125898Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=60cad567-69db-4643-a03f-1debbc960dc7 name=/runtime.v1.ImageService/ImageStatus
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.210858925Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=bf622f81-08ac-4a55-b1e8-0ad8882bda0e name=/runtime.v1.ImageService/PullImage
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.220699168Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jun 26 18:47:52 multinode-306845 crio[952]: time="2023-06-26 18:47:52.833395925Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jun 26 18:47:54 multinode-306845 crio[952]: time="2023-06-26 18:47:54.379545319Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=bf622f81-08ac-4a55-b1e8-0ad8882bda0e name=/runtime.v1.ImageService/PullImage
	Jun 26 18:47:54 multinode-306845 crio[952]: time="2023-06-26 18:47:54.380976054Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=dcd2d1f6-f854-4a68-b278-b1726b7a473c name=/runtime.v1.ImageService/ImageStatus
	Jun 26 18:47:54 multinode-306845 crio[952]: time="2023-06-26 18:47:54.381560364Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=dcd2d1f6-f854-4a68-b278-b1726b7a473c name=/runtime.v1.ImageService/ImageStatus
	Jun 26 18:47:54 multinode-306845 crio[952]: time="2023-06-26 18:47:54.382397505Z" level=info msg="Creating container: default/busybox-67b7f59bb-cxsjd/busybox" id=59061cca-d90b-4f1d-9474-b667a9dd5e96 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 26 18:47:54 multinode-306845 crio[952]: time="2023-06-26 18:47:54.382512927Z" level=warning msg="Allowed annotations are specified for workload []"
	Jun 26 18:47:54 multinode-306845 crio[952]: time="2023-06-26 18:47:54.473759914Z" level=info msg="Created container e28b14d439842ab6d66725e1f4e522764c5af4c7dd84b1e6dfa9640b43dcaddf: default/busybox-67b7f59bb-cxsjd/busybox" id=59061cca-d90b-4f1d-9474-b667a9dd5e96 name=/runtime.v1.RuntimeService/CreateContainer
	Jun 26 18:47:54 multinode-306845 crio[952]: time="2023-06-26 18:47:54.474536031Z" level=info msg="Starting container: e28b14d439842ab6d66725e1f4e522764c5af4c7dd84b1e6dfa9640b43dcaddf" id=f7781264-829d-415a-8ed1-fb8afee21dc9 name=/runtime.v1.RuntimeService/StartContainer
	Jun 26 18:47:54 multinode-306845 crio[952]: time="2023-06-26 18:47:54.483987390Z" level=info msg="Started container" PID=2517 containerID=e28b14d439842ab6d66725e1f4e522764c5af4c7dd84b1e6dfa9640b43dcaddf description=default/busybox-67b7f59bb-cxsjd/busybox id=f7781264-829d-415a-8ed1-fb8afee21dc9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=546d2bac34d2a989e353d3e0d18491f98e111c461701b4f8cd1bfc8f087c7e0e
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e28b14d439842       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   546d2bac34d2a       busybox-67b7f59bb-cxsjd
	f9c5a7a1524d3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   86c83b628a0bd       storage-provisioner
	26f626f43729c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   0                   54311372e41d6       coredns-5d78c9869d-d67vq
	f2175dfe83945       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      About a minute ago   Running             kindnet-cni               0                   d6a5107272686       kindnet-grd84
	f07528f2408e8       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      About a minute ago   Running             kube-proxy                0                   d8f9289e1c1d6       kube-proxy-sk9fw
	2d0d499eb4426       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      About a minute ago   Running             kube-scheduler            0                   b2b85300777a5       kube-scheduler-multinode-306845
	a08559098ef10       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      About a minute ago   Running             kube-controller-manager   0                   9db3a0bc53085       kube-controller-manager-multinode-306845
	dd5fd5120eebe       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      About a minute ago   Running             kube-apiserver            0                   b57bd2df9a43b       kube-apiserver-multinode-306845
	654706fe6a2d8       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   5d50d454338d9       etcd-multinode-306845
	
	* 
	* ==> coredns [26f626f43729cb2e66010774ae5d7f8c745c08f5c2dc9c352b7b1d35d0f462f5] <==
	* [INFO] 10.244.1.2:54680 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083814s
	[INFO] 10.244.0.3:49469 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010932s
	[INFO] 10.244.0.3:59267 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001581676s
	[INFO] 10.244.0.3:47406 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006037s
	[INFO] 10.244.0.3:53434 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000084022s
	[INFO] 10.244.0.3:43753 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001077567s
	[INFO] 10.244.0.3:52573 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050573s
	[INFO] 10.244.0.3:57257 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061488s
	[INFO] 10.244.0.3:46166 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003537s
	[INFO] 10.244.1.2:59823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111283s
	[INFO] 10.244.1.2:33628 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009875s
	[INFO] 10.244.1.2:36489 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060318s
	[INFO] 10.244.1.2:48328 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064467s
	[INFO] 10.244.0.3:48099 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092381s
	[INFO] 10.244.0.3:48757 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087478s
	[INFO] 10.244.0.3:45351 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059004s
	[INFO] 10.244.0.3:51726 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000042757s
	[INFO] 10.244.1.2:58495 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119374s
	[INFO] 10.244.1.2:34815 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153771s
	[INFO] 10.244.1.2:42558 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096084s
	[INFO] 10.244.1.2:58580 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074137s
	[INFO] 10.244.0.3:59421 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115508s
	[INFO] 10.244.0.3:43669 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000068084s
	[INFO] 10.244.0.3:35283 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088039s
	[INFO] 10.244.0.3:60221 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006174s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-306845
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-306845
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=759becbe25e432e7a4042c59713ee144df2072e1
	                    minikube.k8s.io/name=multinode-306845
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_26T18_46_27_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 18:46:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-306845
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 18:47:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 18:47:58 +0000   Mon, 26 Jun 2023 18:46:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 18:47:58 +0000   Mon, 26 Jun 2023 18:46:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 18:47:58 +0000   Mon, 26 Jun 2023 18:46:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 18:47:58 +0000   Mon, 26 Jun 2023 18:46:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-306845
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 817a867dba5b4bf4bf1c0dcf67191cb2
	  System UUID:                7b31ad1f-0b2d-4959-a973-b99694662877
	  Boot ID:                    4f86402f-f9e2-4c4c-a5d0-b2ea258e243c
	  Kernel Version:             5.15.0-1036-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-cxsjd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5d78c9869d-d67vq                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     78s
	  kube-system                 etcd-multinode-306845                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         91s
	  kube-system                 kindnet-grd84                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-multinode-306845             250m (3%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-multinode-306845    200m (2%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-sk9fw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-multinode-306845             100m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s (x9 over 98s)  kubelet          Node multinode-306845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node multinode-306845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x7 over 98s)  kubelet          Node multinode-306845 status is now: NodeHasSufficientPID
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node multinode-306845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node multinode-306845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                kubelet          Node multinode-306845 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           79s                node-controller  Node multinode-306845 event: Registered Node multinode-306845 in Controller
	  Normal  NodeReady                76s                kubelet          Node multinode-306845 status is now: NodeReady
	
	
	Name:               multinode-306845-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-306845-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 26 Jun 2023 18:47:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-306845-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 26 Jun 2023 18:47:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 26 Jun 2023 18:47:49 +0000   Mon, 26 Jun 2023 18:47:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 26 Jun 2023 18:47:49 +0000   Mon, 26 Jun 2023 18:47:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 26 Jun 2023 18:47:49 +0000   Mon, 26 Jun 2023 18:47:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 26 Jun 2023 18:47:49 +0000   Mon, 26 Jun 2023 18:47:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-306845-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6e4abccf597433f844899ac5a0992e1
	  System UUID:                63273130-983c-49ea-b9b0-ee1ef5b76cbe
	  Boot ID:                    4f86402f-f9e2-4c4c-a5d0-b2ea258e243c
	  Kernel Version:             5.15.0-1036-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-c5c5w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-hmt2z              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-proxy-lgtgc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  NodeHasSufficientMemory  53s (x5 over 55s)  kubelet          Node multinode-306845-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x5 over 55s)  kubelet          Node multinode-306845-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x5 over 55s)  kubelet          Node multinode-306845-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node multinode-306845-m02 event: Registered Node multinode-306845-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-306845-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.007343] FS-Cache: O-key=[8] '99a20f0200000000'
	[  +0.004944] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.007952] FS-Cache: N-cookie d=00000000fb8647a3{9p.inode} n=00000000bc38a259
	[  +0.008720] FS-Cache: N-key=[8] '99a20f0200000000'
	[  +2.886438] FS-Cache: Duplicate cookie detected
	[  +0.004754] FS-Cache: O-cookie c=00000024 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006742] FS-Cache: O-cookie d=00000000d65110e0{9P.session} n=000000005f999d16
	[  +0.007526] FS-Cache: O-key=[10] '34323936303633383537'
	[  +0.005375] FS-Cache: N-cookie c=00000025 [p=00000002 fl=2 nc=0 na=1]
	[  +0.007946] FS-Cache: N-cookie d=00000000d65110e0{9P.session} n=0000000046707e50
	[  +0.008925] FS-Cache: N-key=[10] '34323936303633383537'
	[Jun26 18:38] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[  +0.999976] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[  +2.015766] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[  +4.063643] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[  +8.191166] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[ +16.126397] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	[Jun26 18:39] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 93 f1 8c 8e e5 36 c0 5b 6b cf 8d 08 00
	
	* 
	* ==> etcd [654706fe6a2d803bd6571694446537f56e308bc3ad1ddc7c38b56eae28f7585f] <==
	* {"level":"info","ts":"2023-06-26T18:46:21.224Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-26T18:46:21.224Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-26T18:46:21.224Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-26T18:46:21.224Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-06-26T18:46:21.225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-06-26T18:46:21.225Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-06-26T18:46:22.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-26T18:46:22.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-26T18:46:22.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-06-26T18:46:22.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-06-26T18:46:22.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-06-26T18:46:22.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-06-26T18:46:22.012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-06-26T18:46:22.013Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-306845 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-26T18:46:22.013Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T18:46:22.013Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-26T18:46:22.013Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T18:46:22.013Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-26T18:46:22.014Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-26T18:46:22.014Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T18:46:22.014Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T18:46:22.014Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-26T18:46:22.014Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-26T18:46:22.015Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-06-26T18:46:57.630Z","caller":"traceutil/trace.go:171","msg":"trace[2065136619] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"140.974215ms","start":"2023-06-26T18:46:57.489Z","end":"2023-06-26T18:46:57.630Z","steps":["trace[2065136619] 'process raft request'  (duration: 140.746716ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  18:47:58 up  1:30,  0 users,  load average: 0.79, 1.05, 1.21
	Linux multinode-306845 5.15.0-1036-gcp #44~20.04.1-Ubuntu SMP Fri Jun 9 10:48:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [f2175dfe8394527cd408f2b449cf75816c48dfeea5ef82eaef08e949bba9d976] <==
	* I0626 18:46:51.806635       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0626 18:46:51.806659       1 main.go:227] handling current node
	I0626 18:47:01.816179       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0626 18:47:01.816201       1 main.go:227] handling current node
	I0626 18:47:11.820599       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0626 18:47:11.820624       1 main.go:227] handling current node
	I0626 18:47:11.820633       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0626 18:47:11.820638       1 main.go:250] Node multinode-306845-m02 has CIDR [10.244.1.0/24] 
	I0626 18:47:11.820791       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0626 18:47:21.832616       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0626 18:47:21.832639       1 main.go:227] handling current node
	I0626 18:47:21.832648       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0626 18:47:21.832653       1 main.go:250] Node multinode-306845-m02 has CIDR [10.244.1.0/24] 
	I0626 18:47:31.836639       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0626 18:47:31.836662       1 main.go:227] handling current node
	I0626 18:47:31.836671       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0626 18:47:31.836675       1 main.go:250] Node multinode-306845-m02 has CIDR [10.244.1.0/24] 
	I0626 18:47:41.848310       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0626 18:47:41.848336       1 main.go:227] handling current node
	I0626 18:47:41.848345       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0626 18:47:41.848349       1 main.go:250] Node multinode-306845-m02 has CIDR [10.244.1.0/24] 
	I0626 18:47:51.856206       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0626 18:47:51.856309       1 main.go:227] handling current node
	I0626 18:47:51.856345       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0626 18:47:51.856373       1 main.go:250] Node multinode-306845-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [dd5fd5120eebed90409e095408287c8bbaed2ca611a89975e7526eb7a7daa1e2] <==
	* I0626 18:46:23.793880       1 aggregator.go:152] initial CRD sync complete...
	I0626 18:46:23.793927       1 autoregister_controller.go:141] Starting autoregister controller
	I0626 18:46:23.793955       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0626 18:46:23.793982       1 cache.go:39] Caches are synced for autoregister controller
	I0626 18:46:23.794454       1 controller.go:624] quota admission added evaluator for: namespaces
	I0626 18:46:23.800727       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0626 18:46:23.800762       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0626 18:46:23.803484       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0626 18:46:24.006495       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0626 18:46:24.454033       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0626 18:46:24.688181       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0626 18:46:24.692852       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0626 18:46:24.692887       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0626 18:46:25.086524       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0626 18:46:25.137902       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0626 18:46:25.311865       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0626 18:46:25.317108       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0626 18:46:25.318019       1 controller.go:624] quota admission added evaluator for: endpoints
	I0626 18:46:25.321824       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0626 18:46:25.818676       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0626 18:46:26.903215       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0626 18:46:26.914106       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0626 18:46:26.921824       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0626 18:46:40.503261       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0626 18:46:40.618499       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [a08559098ef107c4a22eaed2639348a5deed32df886c8e6ceb84989bb4830f20] <==
	* I0626 18:46:39.891183       1 shared_informer.go:318] Caches are synced for attach detach
	I0626 18:46:39.891285       1 shared_informer.go:318] Caches are synced for GC
	I0626 18:46:39.892374       1 shared_informer.go:318] Caches are synced for stateful set
	I0626 18:46:39.897148       1 shared_informer.go:318] Caches are synced for resource quota
	I0626 18:46:40.392183       1 shared_informer.go:318] Caches are synced for garbage collector
	I0626 18:46:40.392308       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0626 18:46:40.392228       1 shared_informer.go:318] Caches are synced for garbage collector
	I0626 18:46:40.512085       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sk9fw"
	I0626 18:46:40.517347       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-grd84"
	I0626 18:46:40.696252       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0626 18:46:40.797537       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0626 18:46:40.812795       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-w95w5"
	I0626 18:46:40.896117       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-d67vq"
	I0626 18:46:40.995981       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-w95w5"
	I0626 18:46:44.854303       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0626 18:47:05.076087       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-306845-m02\" does not exist"
	I0626 18:47:05.083036       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-306845-m02" podCIDRs=[10.244.1.0/24]
	I0626 18:47:05.085595       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lgtgc"
	I0626 18:47:05.087957       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hmt2z"
	I0626 18:47:09.857317       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-306845-m02"
	I0626 18:47:09.857367       1 event.go:307] "Event occurred" object="multinode-306845-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-306845-m02 event: Registered Node multinode-306845-m02 in Controller"
	W0626 18:47:49.415976       1 topologycache.go:232] Can't get CPU or zone information for multinode-306845-m02 node
	I0626 18:47:51.842770       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0626 18:47:51.849048       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-c5c5w"
	I0626 18:47:51.855988       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-cxsjd"
	
	* 
	* ==> kube-proxy [f07528f2408e808606df1d51084ca5679a707f7f7f1ef907574d22aec5280905] <==
	* I0626 18:46:41.217893       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0626 18:46:41.217995       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0626 18:46:41.218030       1 server_others.go:554] "Using iptables proxy"
	I0626 18:46:41.239792       1 server_others.go:192] "Using iptables Proxier"
	I0626 18:46:41.239846       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0626 18:46:41.239854       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0626 18:46:41.239869       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0626 18:46:41.239901       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0626 18:46:41.240510       1 server.go:658] "Version info" version="v1.27.3"
	I0626 18:46:41.240525       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0626 18:46:41.241288       1 config.go:188] "Starting service config controller"
	I0626 18:46:41.241308       1 config.go:315] "Starting node config controller"
	I0626 18:46:41.241320       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0626 18:46:41.241320       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0626 18:46:41.241589       1 config.go:97] "Starting endpoint slice config controller"
	I0626 18:46:41.241601       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0626 18:46:41.341920       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0626 18:46:41.341957       1 shared_informer.go:318] Caches are synced for service config
	I0626 18:46:41.342026       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2d0d499eb442610221b05e46a01ae08dc9ad72e6ef85668ce3f32cd7e4e12d19] <==
	* W0626 18:46:23.898040       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0626 18:46:23.898066       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0626 18:46:23.898140       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0626 18:46:23.898147       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 18:46:23.898155       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0626 18:46:23.898166       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0626 18:46:23.898239       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0626 18:46:23.898260       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0626 18:46:23.898472       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 18:46:23.898499       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0626 18:46:24.716267       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0626 18:46:24.716309       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0626 18:46:24.793879       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0626 18:46:24.793911       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0626 18:46:24.853132       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0626 18:46:24.853163       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0626 18:46:24.867485       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0626 18:46:24.867518       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0626 18:46:24.873848       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0626 18:46:24.873882       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0626 18:46:24.941926       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0626 18:46:24.941968       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0626 18:46:24.951449       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0626 18:46:24.951488       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0626 18:46:27.616789       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jun 26 18:46:40 multinode-306845 kubelet[1592]: I0626 18:46:40.699194    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b17ff684-f726-4e3f-9e2e-2270a77f0712-xtables-lock\") pod \"kube-proxy-sk9fw\" (UID: \"b17ff684-f726-4e3f-9e2e-2270a77f0712\") " pod="kube-system/kube-proxy-sk9fw"
	Jun 26 18:46:40 multinode-306845 kubelet[1592]: I0626 18:46:40.699234    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t2pr\" (UniqueName: \"kubernetes.io/projected/b17ff684-f726-4e3f-9e2e-2270a77f0712-kube-api-access-7t2pr\") pod \"kube-proxy-sk9fw\" (UID: \"b17ff684-f726-4e3f-9e2e-2270a77f0712\") " pod="kube-system/kube-proxy-sk9fw"
	Jun 26 18:46:40 multinode-306845 kubelet[1592]: I0626 18:46:40.800222    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df4468e3-83cd-4131-8236-f57f2ceab981-lib-modules\") pod \"kindnet-grd84\" (UID: \"df4468e3-83cd-4131-8236-f57f2ceab981\") " pod="kube-system/kindnet-grd84"
	Jun 26 18:46:40 multinode-306845 kubelet[1592]: I0626 18:46:40.800286    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t267\" (UniqueName: \"kubernetes.io/projected/df4468e3-83cd-4131-8236-f57f2ceab981-kube-api-access-4t267\") pod \"kindnet-grd84\" (UID: \"df4468e3-83cd-4131-8236-f57f2ceab981\") " pod="kube-system/kindnet-grd84"
	Jun 26 18:46:40 multinode-306845 kubelet[1592]: I0626 18:46:40.800498    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/df4468e3-83cd-4131-8236-f57f2ceab981-cni-cfg\") pod \"kindnet-grd84\" (UID: \"df4468e3-83cd-4131-8236-f57f2ceab981\") " pod="kube-system/kindnet-grd84"
	Jun 26 18:46:40 multinode-306845 kubelet[1592]: I0626 18:46:40.800545    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df4468e3-83cd-4131-8236-f57f2ceab981-xtables-lock\") pod \"kindnet-grd84\" (UID: \"df4468e3-83cd-4131-8236-f57f2ceab981\") " pod="kube-system/kindnet-grd84"
	Jun 26 18:46:40 multinode-306845 kubelet[1592]: W0626 18:46:40.993189    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/crio-d8f9289e1c1d6e2890176f4e22b0d06304890b6f786cba3f37fa68738e723a78 WatchSource:0}: Error finding container d8f9289e1c1d6e2890176f4e22b0d06304890b6f786cba3f37fa68738e723a78: Status 404 returned error can't find the container with id d8f9289e1c1d6e2890176f4e22b0d06304890b6f786cba3f37fa68738e723a78
	Jun 26 18:46:41 multinode-306845 kubelet[1592]: W0626 18:46:41.249700    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/crio-d6a51072726864dd643c4142743942ad8f35e9c12ab55c0e9381154af31dcb9b WatchSource:0}: Error finding container d6a51072726864dd643c4142743942ad8f35e9c12ab55c0e9381154af31dcb9b: Status 404 returned error can't find the container with id d6a51072726864dd643c4142743942ad8f35e9c12ab55c0e9381154af31dcb9b
	Jun 26 18:46:42 multinode-306845 kubelet[1592]: I0626 18:46:42.128257    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-sk9fw" podStartSLOduration=2.128208255 podCreationTimestamp="2023-06-26 18:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-26 18:46:42.108622402 +0000 UTC m=+15.229450282" watchObservedRunningTime="2023-06-26 18:46:42.128208255 +0000 UTC m=+15.249036133"
	Jun 26 18:46:42 multinode-306845 kubelet[1592]: I0626 18:46:42.192645    1592 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jun 26 18:46:42 multinode-306845 kubelet[1592]: I0626 18:46:42.213224    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-grd84" podStartSLOduration=2.213176798 podCreationTimestamp="2023-06-26 18:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-26 18:46:42.128050794 +0000 UTC m=+15.248878675" watchObservedRunningTime="2023-06-26 18:46:42.213176798 +0000 UTC m=+15.334004672"
	Jun 26 18:46:42 multinode-306845 kubelet[1592]: I0626 18:46:42.213633    1592 topology_manager.go:212] "Topology Admit Handler"
	Jun 26 18:46:42 multinode-306845 kubelet[1592]: I0626 18:46:42.412755    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/631de45d-1e2e-45b8-bdb0-1220d4b68aef-config-volume\") pod \"coredns-5d78c9869d-d67vq\" (UID: \"631de45d-1e2e-45b8-bdb0-1220d4b68aef\") " pod="kube-system/coredns-5d78c9869d-d67vq"
	Jun 26 18:46:42 multinode-306845 kubelet[1592]: I0626 18:46:42.412812    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f2tf\" (UniqueName: \"kubernetes.io/projected/631de45d-1e2e-45b8-bdb0-1220d4b68aef-kube-api-access-4f2tf\") pod \"coredns-5d78c9869d-d67vq\" (UID: \"631de45d-1e2e-45b8-bdb0-1220d4b68aef\") " pod="kube-system/coredns-5d78c9869d-d67vq"
	Jun 26 18:46:42 multinode-306845 kubelet[1592]: W0626 18:46:42.861640    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/crio-54311372e41d682f554cb50e5aacd8fe5ff5f295cefe4bfd8801b347885b505a WatchSource:0}: Error finding container 54311372e41d682f554cb50e5aacd8fe5ff5f295cefe4bfd8801b347885b505a: Status 404 returned error can't find the container with id 54311372e41d682f554cb50e5aacd8fe5ff5f295cefe4bfd8801b347885b505a
	Jun 26 18:46:43 multinode-306845 kubelet[1592]: I0626 18:46:43.013134    1592 topology_manager.go:212] "Topology Admit Handler"
	Jun 26 18:46:43 multinode-306845 kubelet[1592]: I0626 18:46:43.114459    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-d67vq" podStartSLOduration=3.114405772 podCreationTimestamp="2023-06-26 18:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-26 18:46:43.113941153 +0000 UTC m=+16.234769037" watchObservedRunningTime="2023-06-26 18:46:43.114405772 +0000 UTC m=+16.235233652"
	Jun 26 18:46:43 multinode-306845 kubelet[1592]: I0626 18:46:43.116761    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4sk\" (UniqueName: \"kubernetes.io/projected/61d6f823-d74e-4855-a446-94018e9ddcd8-kube-api-access-lx4sk\") pod \"storage-provisioner\" (UID: \"61d6f823-d74e-4855-a446-94018e9ddcd8\") " pod="kube-system/storage-provisioner"
	Jun 26 18:46:43 multinode-306845 kubelet[1592]: I0626 18:46:43.116828    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/61d6f823-d74e-4855-a446-94018e9ddcd8-tmp\") pod \"storage-provisioner\" (UID: \"61d6f823-d74e-4855-a446-94018e9ddcd8\") " pod="kube-system/storage-provisioner"
	Jun 26 18:46:43 multinode-306845 kubelet[1592]: W0626 18:46:43.353573    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/crio-86c83b628a0bd741ac3fe0e67f14826b0a253173bbca4331e1098946cf68aff7 WatchSource:0}: Error finding container 86c83b628a0bd741ac3fe0e67f14826b0a253173bbca4331e1098946cf68aff7: Status 404 returned error can't find the container with id 86c83b628a0bd741ac3fe0e67f14826b0a253173bbca4331e1098946cf68aff7
	Jun 26 18:46:44 multinode-306845 kubelet[1592]: I0626 18:46:44.115592    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.115538084 podCreationTimestamp="2023-06-26 18:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-26 18:46:44.115128264 +0000 UTC m=+17.235956144" watchObservedRunningTime="2023-06-26 18:46:44.115538084 +0000 UTC m=+17.236365972"
	Jun 26 18:47:51 multinode-306845 kubelet[1592]: I0626 18:47:51.861935    1592 topology_manager.go:212] "Topology Admit Handler"
	Jun 26 18:47:51 multinode-306845 kubelet[1592]: I0626 18:47:51.987501    1592 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghpp5\" (UniqueName: \"kubernetes.io/projected/42fac0b7-5d0b-40d9-ae42-4f9b17ca8dc7-kube-api-access-ghpp5\") pod \"busybox-67b7f59bb-cxsjd\" (UID: \"42fac0b7-5d0b-40d9-ae42-4f9b17ca8dc7\") " pod="default/busybox-67b7f59bb-cxsjd"
	Jun 26 18:47:52 multinode-306845 kubelet[1592]: W0626 18:47:52.205814    1592 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/crio-546d2bac34d2a989e353d3e0d18491f98e111c461701b4f8cd1bfc8f087c7e0e WatchSource:0}: Error finding container 546d2bac34d2a989e353d3e0d18491f98e111c461701b4f8cd1bfc8f087c7e0e: Status 404 returned error can't find the container with id 546d2bac34d2a989e353d3e0d18491f98e111c461701b4f8cd1bfc8f087c7e0e
	Jun 26 18:47:55 multinode-306845 kubelet[1592]: I0626 18:47:55.253293    1592 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-cxsjd" podStartSLOduration=2.083421731 podCreationTimestamp="2023-06-26 18:47:51 +0000 UTC" firstStartedPulling="2023-06-26 18:47:52.21031552 +0000 UTC m=+85.331143388" lastFinishedPulling="2023-06-26 18:47:54.380145866 +0000 UTC m=+87.500973735" observedRunningTime="2023-06-26 18:47:55.252763405 +0000 UTC m=+88.373591284" watchObservedRunningTime="2023-06-26 18:47:55.253252078 +0000 UTC m=+88.374079955"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-306845 -n multinode-306845
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-306845 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.43s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (69.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.9.0.4076958805.exe start -p running-upgrade-455984 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.9.0.4076958805.exe start -p running-upgrade-455984 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.2193585s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-455984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0626 19:00:10.381210  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-455984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.336863772s)

                                                
                                                
-- stdout --
	* [running-upgrade-455984] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-455984 in cluster running-upgrade-455984
	* Pulling base image ...
	* Updating the running docker "running-upgrade-455984" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 19:00:09.181161  510174 out.go:296] Setting OutFile to fd 1 ...
	I0626 19:00:09.181296  510174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:00:09.181305  510174 out.go:309] Setting ErrFile to fd 2...
	I0626 19:00:09.181310  510174 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 19:00:09.181456  510174 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 19:00:09.181994  510174 out.go:303] Setting JSON to false
	I0626 19:00:09.183508  510174 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6159,"bootTime":1687799850,"procs":750,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 19:00:09.183570  510174 start.go:137] virtualization: kvm guest
	I0626 19:00:09.186162  510174 out.go:177] * [running-upgrade-455984] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 19:00:09.188296  510174 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 19:00:09.189822  510174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 19:00:09.188313  510174 notify.go:220] Checking for updates...
	I0626 19:00:09.192662  510174 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 19:00:09.194286  510174 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 19:00:09.195656  510174 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 19:00:09.197134  510174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 19:00:09.199086  510174 config.go:182] Loaded profile config "running-upgrade-455984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0626 19:00:09.199111  510174 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0626 19:00:09.200834  510174 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0626 19:00:09.202090  510174 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 19:00:09.226653  510174 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 19:00:09.226760  510174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 19:00:09.281376  510174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:75 SystemTime:2023-06-26 19:00:09.272471881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 19:00:09.281488  510174 docker.go:294] overlay module found
	I0626 19:00:09.283539  510174 out.go:177] * Using the docker driver based on existing profile
	I0626 19:00:09.284917  510174 start.go:297] selected driver: docker
	I0626 19:00:09.284934  510174 start.go:954] validating driver "docker" against &{Name:running-upgrade-455984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-455984 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:00:09.285043  510174 start.go:965] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 19:00:09.285857  510174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 19:00:09.340899  510174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:75 SystemTime:2023-06-26 19:00:09.331521654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 19:00:09.341227  510174 cni.go:84] Creating CNI manager for ""
	I0626 19:00:09.341249  510174 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0626 19:00:09.341258  510174 start_flags.go:319] config:
	{Name:running-upgrade-455984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-455984 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 19:00:09.343226  510174 out.go:177] * Starting control plane node running-upgrade-455984 in cluster running-upgrade-455984
	I0626 19:00:09.344622  510174 cache.go:122] Beginning downloading kic base image for docker with crio
	I0626 19:00:09.346160  510174 out.go:177] * Pulling base image ...
	I0626 19:00:09.347537  510174 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0626 19:00:09.347664  510174 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon
	I0626 19:00:09.366663  510174 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon, skipping pull
	I0626 19:00:09.366689  510174 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 exists in daemon, skipping load
	W0626 19:00:09.452548  510174 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0626 19:00:09.452797  510174 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/running-upgrade-455984/config.json ...
	I0626 19:00:09.452825  510174 cache.go:107] acquiring lock: {Name:mke8b17981d21cb7af9bc9c5782e04226d61a193 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:00:09.452833  510174 cache.go:107] acquiring lock: {Name:mk55efbcf942fe910700a7ba38fa26c80218e958 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:00:09.452916  510174 cache.go:107] acquiring lock: {Name:mk4d3b9583426caa24bdd0bf6a959afe9f458d91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:00:09.452981  510174 cache.go:115] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0626 19:00:09.452992  510174 cache.go:115] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0626 19:00:09.452994  510174 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 177.111µs
	I0626 19:00:09.452966  510174 cache.go:107] acquiring lock: {Name:mk98ff9f0b19b3086d7fd8d891babcc95e005586 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:00:09.453010  510174 cache.go:115] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0626 19:00:09.453017  510174 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0626 19:00:09.453012  510174 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 202.333µs
	I0626 19:00:09.452854  510174 cache.go:107] acquiring lock: {Name:mk0397d965390f73dea451f89d5d69476a1aa883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:00:09.453021  510174 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 108.515µs
	I0626 19:00:09.453034  510174 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0626 19:00:09.453025  510174 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0626 19:00:09.453024  510174 cache.go:107] acquiring lock: {Name:mkef1417a175503f89bb8e4759f561a386a24c45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:00:09.453149  510174 cache.go:115] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0626 19:00:09.453108  510174 cache.go:107] acquiring lock: {Name:mk2fa785278803b3431f7cef181bb9ea6a6f7c70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:00:09.453153  510174 cache.go:115] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0626 19:00:09.453150  510174 cache.go:107] acquiring lock: {Name:mkc5c04fbf4b5ae9256c2169bed9f6c0ee26e88a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:00:09.453175  510174 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 242.348µs
	I0626 19:00:09.453190  510174 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0626 19:00:09.453159  510174 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 318.17µs
	I0626 19:00:09.453204  510174 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0626 19:00:09.453148  510174 cache.go:195] Successfully downloaded all kic artifacts
	I0626 19:00:09.453223  510174 cache.go:115] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0626 19:00:09.453233  510174 start.go:365] acquiring machines lock for running-upgrade-455984: {Name:mk43ca68edff792ee8aa8225f8622d5b6c5f3c5d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 19:00:09.453234  510174 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 115.855µs
	I0626 19:00:09.453147  510174 cache.go:115] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0626 19:00:09.453251  510174 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0626 19:00:09.453259  510174 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 271.163µs
	I0626 19:00:09.453270  510174 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0626 19:00:09.453271  510174 cache.go:115] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0626 19:00:09.453283  510174 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 230.902µs
	I0626 19:00:09.453298  510174 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0626 19:00:09.453303  510174 start.go:369] acquired machines lock for "running-upgrade-455984" in 56.199µs
	I0626 19:00:09.453306  510174 cache.go:87] Successfully saved all images to host disk.
	I0626 19:00:09.453317  510174 start.go:96] Skipping create...Using existing machine configuration
	I0626 19:00:09.453323  510174 fix.go:54] fixHost starting: m01
	I0626 19:00:09.453599  510174 cli_runner.go:164] Run: docker container inspect running-upgrade-455984 --format={{.State.Status}}
	I0626 19:00:09.475041  510174 fix.go:102] recreateIfNeeded on running-upgrade-455984: state=Running err=<nil>
	W0626 19:00:09.475083  510174 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 19:00:09.477366  510174 out.go:177] * Updating the running docker "running-upgrade-455984" container ...
	I0626 19:00:09.478997  510174 machine.go:88] provisioning docker machine ...
	I0626 19:00:09.479041  510174 ubuntu.go:169] provisioning hostname "running-upgrade-455984"
	I0626 19:00:09.479120  510174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-455984
	I0626 19:00:09.499629  510174 main.go:141] libmachine: Using SSH client type: native
	I0626 19:00:09.500073  510174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33264 <nil> <nil>}
	I0626 19:00:09.500088  510174 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-455984 && echo "running-upgrade-455984" | sudo tee /etc/hostname
	I0626 19:00:09.623917  510174 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-455984
	
	I0626 19:00:09.624017  510174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-455984
	I0626 19:00:09.641480  510174 main.go:141] libmachine: Using SSH client type: native
	I0626 19:00:09.642069  510174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33264 <nil> <nil>}
	I0626 19:00:09.642097  510174 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-455984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-455984/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-455984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 19:00:09.748782  510174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 19:00:09.748813  510174 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16761-330054/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-330054/.minikube}
	I0626 19:00:09.748845  510174 ubuntu.go:177] setting up certificates
	I0626 19:00:09.748887  510174 provision.go:83] configureAuth start
	I0626 19:00:09.748960  510174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-455984
	I0626 19:00:09.766990  510174 provision.go:138] copyHostCerts
	I0626 19:00:09.767075  510174 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem, removing ...
	I0626 19:00:09.767093  510174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem
	I0626 19:00:09.767164  510174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem (1082 bytes)
	I0626 19:00:09.767294  510174 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem, removing ...
	I0626 19:00:09.767308  510174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem
	I0626 19:00:09.767348  510174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem (1123 bytes)
	I0626 19:00:09.767422  510174 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem, removing ...
	I0626 19:00:09.767432  510174 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem
	I0626 19:00:09.767468  510174 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem (1679 bytes)
	I0626 19:00:09.767542  510174 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-455984 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-455984]
	I0626 19:00:09.954104  510174 provision.go:172] copyRemoteCerts
	I0626 19:00:09.954166  510174 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 19:00:09.954202  510174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-455984
	I0626 19:00:09.972113  510174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33264 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/running-upgrade-455984/id_rsa Username:docker}
	I0626 19:00:10.056941  510174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0626 19:00:10.076142  510174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0626 19:00:10.093555  510174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 19:00:10.110682  510174 provision.go:86] duration metric: configureAuth took 361.772195ms
	I0626 19:00:10.110715  510174 ubuntu.go:193] setting minikube options for container-runtime
	I0626 19:00:10.110964  510174 config.go:182] Loaded profile config "running-upgrade-455984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0626 19:00:10.111111  510174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-455984
	I0626 19:00:10.133051  510174 main.go:141] libmachine: Using SSH client type: native
	I0626 19:00:10.133693  510174 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33264 <nil> <nil>}
	I0626 19:00:10.133728  510174 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 19:00:11.026894  510174 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 19:00:11.026927  510174 machine.go:91] provisioned docker machine in 1.547908314s
	I0626 19:00:11.026939  510174 start.go:300] post-start starting for "running-upgrade-455984" (driver="docker")
	I0626 19:00:11.026952  510174 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 19:00:11.027015  510174 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 19:00:11.027070  510174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-455984
	I0626 19:00:11.043771  510174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33264 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/running-upgrade-455984/id_rsa Username:docker}
	I0626 19:00:11.128505  510174 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 19:00:11.131462  510174 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0626 19:00:11.131492  510174 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0626 19:00:11.131505  510174 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0626 19:00:11.131512  510174 info.go:137] Remote host: Ubuntu 19.10
	I0626 19:00:11.131523  510174 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/addons for local assets ...
	I0626 19:00:11.131577  510174 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/files for local assets ...
	I0626 19:00:11.131651  510174 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> 3369352.pem in /etc/ssl/certs
	I0626 19:00:11.131739  510174 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 19:00:11.138393  510174 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem --> /etc/ssl/certs/3369352.pem (1708 bytes)
	I0626 19:00:11.155184  510174 start.go:303] post-start completed in 128.226998ms
	I0626 19:00:11.155271  510174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 19:00:11.155329  510174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-455984
	I0626 19:00:11.171963  510174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33264 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/running-upgrade-455984/id_rsa Username:docker}
	I0626 19:00:11.249395  510174 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0626 19:00:11.253332  510174 fix.go:56] fixHost completed within 1.799998153s
	I0626 19:00:11.253364  510174 start.go:83] releasing machines lock for "running-upgrade-455984", held for 1.800050878s
	I0626 19:00:11.253437  510174 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-455984
	I0626 19:00:11.270409  510174 ssh_runner.go:195] Run: cat /version.json
	I0626 19:00:11.270468  510174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-455984
	I0626 19:00:11.270468  510174 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 19:00:11.270660  510174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-455984
	I0626 19:00:11.289064  510174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33264 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/running-upgrade-455984/id_rsa Username:docker}
	I0626 19:00:11.289078  510174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33264 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/running-upgrade-455984/id_rsa Username:docker}
	W0626 19:00:11.364060  510174 start.go:493] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0626 19:00:11.364147  510174 ssh_runner.go:195] Run: systemctl --version
	I0626 19:00:11.415916  510174 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 19:00:11.467672  510174 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 19:00:11.471788  510174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 19:00:11.548100  510174 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0626 19:00:11.548182  510174 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 19:00:11.714017  510174 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 19:00:11.714041  510174 start.go:466] detecting cgroup driver to use...
	I0626 19:00:11.714079  510174 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0626 19:00:11.714132  510174 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 19:00:11.736311  510174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 19:00:11.746685  510174 docker.go:196] disabling cri-docker service (if available) ...
	I0626 19:00:11.746733  510174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 19:00:11.755603  510174 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 19:00:11.764309  510174 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0626 19:00:11.773180  510174 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0626 19:00:11.773242  510174 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 19:00:11.862664  510174 docker.go:212] disabling docker service ...
	I0626 19:00:11.862761  510174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 19:00:11.872828  510174 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 19:00:11.882559  510174 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 19:00:11.963034  510174 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 19:00:12.104766  510174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 19:00:12.114971  510174 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 19:00:12.129879  510174 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0626 19:00:12.129941  510174 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 19:00:12.255755  510174 out.go:177] 
	W0626 19:00:12.320045  510174 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0626 19:00:12.320073  510174 out.go:239] * 
	W0626 19:00:12.320990  510174 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0626 19:00:12.399928  510174 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-455984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-06-26 19:00:12.485879943 +0000 UTC m=+2069.080227881
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-455984
helpers_test.go:235: (dbg) docker inspect running-upgrade-455984:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a67eb615b8c0cf12203eb12fc4cd2509eca71f4f24b3d1bb2d0e5fea980158c8",
	        "Created": "2023-06-26T18:59:07.256630479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496745,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-06-26T18:59:07.797177617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/a67eb615b8c0cf12203eb12fc4cd2509eca71f4f24b3d1bb2d0e5fea980158c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a67eb615b8c0cf12203eb12fc4cd2509eca71f4f24b3d1bb2d0e5fea980158c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/a67eb615b8c0cf12203eb12fc4cd2509eca71f4f24b3d1bb2d0e5fea980158c8/hosts",
	        "LogPath": "/var/lib/docker/containers/a67eb615b8c0cf12203eb12fc4cd2509eca71f4f24b3d1bb2d0e5fea980158c8/a67eb615b8c0cf12203eb12fc4cd2509eca71f4f24b3d1bb2d0e5fea980158c8-json.log",
	        "Name": "/running-upgrade-455984",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-455984:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/507056a6906e94f168b3a01ce85d449f121a2109a0b9451b57f56356f946c34c-init/diff:/var/lib/docker/overlay2/afd7061581402f598839d6be47fd119c32f2652f45cb7ed92b89ba129e8088a9/diff:/var/lib/docker/overlay2/fa4a691b7b14158cc174aa63aa8e58a8425c9b706adfd55d75c680a11963d10a/diff:/var/lib/docker/overlay2/cfda4bd0cba493ff69755b0e45b75412bdc44b95f325964d877ac4a73b176008/diff:/var/lib/docker/overlay2/b9b6e4e2a536f5039710da30699acf151ff2f0a20fdeb1d56b9f690ed5d2e6db/diff:/var/lib/docker/overlay2/8922d02036362a990ef9f186a82f46757d1729002d4633ca2a1f13b72ed2b0c9/diff:/var/lib/docker/overlay2/0bf3aec2110923c34f931f80a774b280175efc7f0db0e415a08bbaae7653936b/diff:/var/lib/docker/overlay2/3cbeb59fb56c38875eac891715d0c0cb773b6f87cb6fa19509255f8efc77d745/diff:/var/lib/docker/overlay2/9358f599896dc1235659df051b33466e4f58fb247590c9e8189fa24c03f9a368/diff:/var/lib/docker/overlay2/5fcc8883d11ad191ea90ad2f2b91eb5c2208d25435c5c74958fcc01d1f52ba47/diff:/var/lib/docker/overlay2/fd3e5a
0cfe6fccaaf3e01daad673c3cf47f190b42f7fcd36009617dae7241c65/diff:/var/lib/docker/overlay2/4216aaa3180f7fcdc13cfdf46e37cb3bd481198864e98201e3a86a3327095291/diff:/var/lib/docker/overlay2/9cfe7ed2151065a63bdc2876b0ed704cfa93f99518961055b4c2be4dee399af7/diff:/var/lib/docker/overlay2/af56f9b70848bfd7a9e8e6f135aebd487fde95d80e63a3122311d50d293eb2d8/diff:/var/lib/docker/overlay2/f5421339f43e2c8148abed68340c653cba909163e928a67a46f8511777f74214/diff:/var/lib/docker/overlay2/56dbde9782eb76e21e8368f994b65b84ac6831f547c067111da2e9ec19096e08/diff:/var/lib/docker/overlay2/d1fb19a78095f2aaf0b7a67a3fc8a0c01aac61b07118ce7c40f322149f176cac/diff:/var/lib/docker/overlay2/ec21da7548a03e1b78bf006cbd931a4554eff64e8a2e257d651532e199e6359a/diff:/var/lib/docker/overlay2/32b43767a37c5b11b64b7b0814ff9324c0a59e7584f9bd54e3b62b2a7f21318c/diff:/var/lib/docker/overlay2/7c786495284dc4088843736f56cc7444e3e80fdcd4e8233a8d24dbebebeef706/diff:/var/lib/docker/overlay2/91ae7fca51a5363f619e4c715200b1f2231b16f6a27ad35dff5dc533b9d583e5/diff:/var/lib/d
ocker/overlay2/f6a88964500422d6febae3f6bcb97a7eed1ba48b5770beaaaa46eeb505c0b4db/diff",
	                "MergedDir": "/var/lib/docker/overlay2/507056a6906e94f168b3a01ce85d449f121a2109a0b9451b57f56356f946c34c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/507056a6906e94f168b3a01ce85d449f121a2109a0b9451b57f56356f946c34c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/507056a6906e94f168b3a01ce85d449f121a2109a0b9451b57f56356f946c34c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-455984",
	                "Source": "/var/lib/docker/volumes/running-upgrade-455984/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-455984",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-455984",
	                "name.minikube.sigs.k8s.io": "running-upgrade-455984",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "711b2c168aa3b121bfd792209466f9403475b19ad425664022dfad66b85013f2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33264"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33263"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33262"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/711b2c168aa3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "73218ec4364b2b39d883930d429f0c910186de2e17dd74a2cbc8473d771fa4e0",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "500ebbf547848ab4cba897475eb0ec50f42937ba16ca1ebd4c45f8873d0c56e2",
	                    "EndpointID": "73218ec4364b2b39d883930d429f0c910186de2e17dd74a2cbc8473d771fa4e0",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-455984 -n running-upgrade-455984
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-455984 -n running-upgrade-455984: exit status 4 (279.455534ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 19:00:12.754993  510997 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-455984" does not appear in /home/jenkins/minikube-integration/16761-330054/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-455984" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-455984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-455984
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-455984: (1.841128071s)
--- FAIL: TestRunningBinaryUpgrade (69.84s)
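Note on the failure above: the new minikube binary tries to set the pause image by running sed against /etc/crio/crio.conf.d/02-crio.conf, but that drop-in file does not exist on the Ubuntu 19.10 rootfs shipped with the v1.9.0 kic base image, so the start aborts with RUNTIME_ENABLE. A minimal shell sketch of a guarded version of that update follows; the fallback to the monolithic /etc/crio/crio.conf is an assumption for illustration, not minikube's actual behaviour.

    # Hypothetical sketch: only rewrite pause_image in a CRI-O config file that actually exists.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    [ -f "$CONF" ] || CONF=/etc/crio/crio.conf   # assumed fallback for older base images
    if [ -f "$CONF" ]; then
      sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
    fi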

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (111.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.3698418868.exe start -p stopped-upgrade-735296 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.3698418868.exe start -p stopped-upgrade-735296 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m35.1776145s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.9.0.3698418868.exe -p stopped-upgrade-735296 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.9.0.3698418868.exe -p stopped-upgrade-735296 stop: (10.899133868s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-735296 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-735296 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.832493309s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-735296] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-735296 in cluster stopped-upgrade-735296
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-735296" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 18:59:00.237996  494291 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:59:00.238174  494291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:59:00.238188  494291 out.go:309] Setting ErrFile to fd 2...
	I0626 18:59:00.238196  494291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:59:00.238375  494291 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 18:59:00.239236  494291 out.go:303] Setting JSON to false
	I0626 18:59:00.241085  494291 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6090,"bootTime":1687799850,"procs":696,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 18:59:00.241155  494291 start.go:137] virtualization: kvm guest
	I0626 18:59:00.243485  494291 out.go:177] * [stopped-upgrade-735296] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 18:59:00.245635  494291 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 18:59:00.245604  494291 notify.go:220] Checking for updates...
	I0626 18:59:00.247086  494291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 18:59:00.248569  494291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:59:00.250258  494291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 18:59:00.251617  494291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 18:59:00.252967  494291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 18:59:00.254613  494291 config.go:182] Loaded profile config "stopped-upgrade-735296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0626 18:59:00.254650  494291 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0626 18:59:00.256372  494291 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0626 18:59:00.257908  494291 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 18:59:00.280772  494291 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 18:59:00.280890  494291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:59:00.350609  494291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:65 SystemTime:2023-06-26 18:59:00.340410561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:59:00.350728  494291 docker.go:294] overlay module found
	I0626 18:59:00.352769  494291 out.go:177] * Using the docker driver based on existing profile
	I0626 18:59:00.354159  494291 start.go:297] selected driver: docker
	I0626 18:59:00.354178  494291 start.go:954] validating driver "docker" against &{Name:stopped-upgrade-735296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-735296 Namespace: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:59:00.354302  494291 start.go:965] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 18:59:00.355232  494291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:59:00.408942  494291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:65 SystemTime:2023-06-26 18:59:00.398349339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:59:00.409332  494291 cni.go:84] Creating CNI manager for ""
	I0626 18:59:00.409348  494291 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0626 18:59:00.409358  494291 start_flags.go:319] config:
	{Name:stopped-upgrade-735296 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-735296 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:59:00.411038  494291 out.go:177] * Starting control plane node stopped-upgrade-735296 in cluster stopped-upgrade-735296
	I0626 18:59:00.413089  494291 cache.go:122] Beginning downloading kic base image for docker with crio
	I0626 18:59:00.414383  494291 out.go:177] * Pulling base image ...
	I0626 18:59:00.415845  494291 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0626 18:59:00.415920  494291 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon
	I0626 18:59:00.433345  494291 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon, skipping pull
	I0626 18:59:00.433371  494291 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 exists in daemon, skipping load
	W0626 18:59:00.521603  494291 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0626 18:59:00.521777  494291 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/stopped-upgrade-735296/config.json ...
	I0626 18:59:00.521875  494291 cache.go:107] acquiring lock: {Name:mk55efbcf942fe910700a7ba38fa26c80218e958 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:59:00.521883  494291 cache.go:107] acquiring lock: {Name:mkc5c04fbf4b5ae9256c2169bed9f6c0ee26e88a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:59:00.521929  494291 cache.go:107] acquiring lock: {Name:mk98ff9f0b19b3086d7fd8d891babcc95e005586 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:59:00.521939  494291 cache.go:107] acquiring lock: {Name:mke8b17981d21cb7af9bc9c5782e04226d61a193 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:59:00.521959  494291 cache.go:107] acquiring lock: {Name:mk2fa785278803b3431f7cef181bb9ea6a6f7c70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:59:00.521891  494291 cache.go:107] acquiring lock: {Name:mkef1417a175503f89bb8e4759f561a386a24c45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:59:00.521946  494291 cache.go:107] acquiring lock: {Name:mk0397d965390f73dea451f89d5d69476a1aa883 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:59:00.522005  494291 cache.go:107] acquiring lock: {Name:mk4d3b9583426caa24bdd0bf6a959afe9f458d91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:59:00.522074  494291 cache.go:195] Successfully downloaded all kic artifacts
	I0626 18:59:00.522076  494291 cache.go:115] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0626 18:59:00.522091  494291 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 226.145µs
	I0626 18:59:00.522096  494291 start.go:365] acquiring machines lock for stopped-upgrade-735296: {Name:mk5de21c75e1b2a78fe9794efef1bbe49991018c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0626 18:59:00.522112  494291 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0626 18:59:00.522138  494291 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0626 18:59:00.522161  494291 start.go:369] acquired machines lock for "stopped-upgrade-735296" in 54.866µs
	I0626 18:59:00.522176  494291 start.go:96] Skipping create...Using existing machine configuration
	I0626 18:59:00.522177  494291 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0626 18:59:00.522183  494291 fix.go:54] fixHost starting: m01
	I0626 18:59:00.522190  494291 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0626 18:59:00.522385  494291 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0626 18:59:00.522129  494291 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0626 18:59:00.522417  494291 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0626 18:59:00.522529  494291 cli_runner.go:164] Run: docker container inspect stopped-upgrade-735296 --format={{.State.Status}}
	I0626 18:59:00.522131  494291 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0626 18:59:00.523552  494291 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0626 18:59:00.523560  494291 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0626 18:59:00.523560  494291 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0626 18:59:00.523571  494291 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0626 18:59:00.523551  494291 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0626 18:59:00.523624  494291 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0626 18:59:00.523638  494291 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0626 18:59:00.545482  494291 fix.go:102] recreateIfNeeded on stopped-upgrade-735296: state=Stopped err=<nil>
	W0626 18:59:00.545525  494291 fix.go:128] unexpected machine state, will restart: <nil>
	I0626 18:59:00.547831  494291 out.go:177] * Restarting existing docker container for "stopped-upgrade-735296" ...
	I0626 18:59:00.549275  494291 cli_runner.go:164] Run: docker start stopped-upgrade-735296
	I0626 18:59:00.739580  494291 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0626 18:59:00.775894  494291 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0626 18:59:00.810248  494291 cli_runner.go:164] Run: docker container inspect stopped-upgrade-735296 --format={{.State.Status}}
	I0626 18:59:00.826806  494291 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0626 18:59:00.829985  494291 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0626 18:59:00.830358  494291 kic.go:426] container "stopped-upgrade-735296" state is running.
	I0626 18:59:00.830637  494291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-735296
	I0626 18:59:00.867913  494291 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/stopped-upgrade-735296/config.json ...
	I0626 18:59:00.868123  494291 machine.go:88] provisioning docker machine ...
	I0626 18:59:00.868144  494291 ubuntu.go:169] provisioning hostname "stopped-upgrade-735296"
	I0626 18:59:00.868185  494291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-735296
	I0626 18:59:00.884205  494291 cache.go:157] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0626 18:59:00.884231  494291 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 362.360568ms
	I0626 18:59:00.884243  494291 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0626 18:59:00.888502  494291 main.go:141] libmachine: Using SSH client type: native
	I0626 18:59:00.888955  494291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33261 <nil> <nil>}
	I0626 18:59:00.888979  494291 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-735296 && echo "stopped-upgrade-735296" | sudo tee /etc/hostname
	I0626 18:59:00.889543  494291 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36622->127.0.0.1:33261: read: connection reset by peer
	I0626 18:59:00.925230  494291 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0626 18:59:00.949462  494291 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0626 18:59:00.957488  494291 cache.go:162] opening:  /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0626 18:59:01.538367  494291 cache.go:157] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0626 18:59:01.538395  494291 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 1.016391375s
	I0626 18:59:01.538411  494291 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0626 18:59:01.943479  494291 cache.go:157] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0626 18:59:01.943510  494291 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.421627465s
	I0626 18:59:01.943522  494291 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0626 18:59:02.133097  494291 cache.go:157] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0626 18:59:02.133121  494291 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.611204309s
	I0626 18:59:02.133134  494291 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0626 18:59:02.152262  494291 cache.go:157] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0626 18:59:02.152290  494291 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.630382684s
	I0626 18:59:02.152306  494291 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0626 18:59:02.747820  494291 cache.go:157] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0626 18:59:02.747848  494291 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 2.225889489s
	I0626 18:59:02.747863  494291 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0626 18:59:03.029597  494291 cache.go:157] /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0626 18:59:03.029622  494291 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.507695595s
	I0626 18:59:03.029636  494291 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0626 18:59:03.029652  494291 cache.go:87] Successfully saved all images to host disk.
	I0626 18:59:04.122884  494291 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-735296
	
	I0626 18:59:04.122988  494291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-735296
	I0626 18:59:04.148513  494291 main.go:141] libmachine: Using SSH client type: native
	I0626 18:59:04.149175  494291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33261 <nil> <nil>}
	I0626 18:59:04.149211  494291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-735296' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-735296/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-735296' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0626 18:59:04.256729  494291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0626 18:59:04.256763  494291 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16761-330054/.minikube CaCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16761-330054/.minikube}
	I0626 18:59:04.256819  494291 ubuntu.go:177] setting up certificates
	I0626 18:59:04.256834  494291 provision.go:83] configureAuth start
	I0626 18:59:04.256918  494291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-735296
	I0626 18:59:04.274349  494291 provision.go:138] copyHostCerts
	I0626 18:59:04.274437  494291 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem, removing ...
	I0626 18:59:04.274455  494291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem
	I0626 18:59:04.274563  494291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/ca.pem (1082 bytes)
	I0626 18:59:04.274713  494291 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem, removing ...
	I0626 18:59:04.274722  494291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem
	I0626 18:59:04.274762  494291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/cert.pem (1123 bytes)
	I0626 18:59:04.274856  494291 exec_runner.go:144] found /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem, removing ...
	I0626 18:59:04.274862  494291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem
	I0626 18:59:04.274896  494291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16761-330054/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16761-330054/.minikube/key.pem (1679 bytes)
	I0626 18:59:04.274971  494291 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-735296 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-735296]
	I0626 18:59:04.459146  494291 provision.go:172] copyRemoteCerts
	I0626 18:59:04.459220  494291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0626 18:59:04.459265  494291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-735296
	I0626 18:59:04.476436  494291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33261 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/stopped-upgrade-735296/id_rsa Username:docker}
	I0626 18:59:04.559740  494291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0626 18:59:04.576622  494291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0626 18:59:04.595831  494291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0626 18:59:04.613897  494291 provision.go:86] duration metric: configureAuth took 357.036641ms
	I0626 18:59:04.613934  494291 ubuntu.go:193] setting minikube options for container-runtime
	I0626 18:59:04.614148  494291 config.go:182] Loaded profile config "stopped-upgrade-735296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0626 18:59:04.614264  494291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-735296
	I0626 18:59:04.631711  494291 main.go:141] libmachine: Using SSH client type: native
	I0626 18:59:04.632138  494291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80e840] 0x8118e0 <nil>  [] 0s} 127.0.0.1 33261 <nil> <nil>}
	I0626 18:59:04.632156  494291 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0626 18:59:05.210994  494291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0626 18:59:05.211025  494291 machine.go:91] provisioned docker machine in 4.342886489s
	I0626 18:59:05.211035  494291 start.go:300] post-start starting for "stopped-upgrade-735296" (driver="docker")
	I0626 18:59:05.211045  494291 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0626 18:59:05.211107  494291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0626 18:59:05.211157  494291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-735296
	I0626 18:59:05.228912  494291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33261 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/stopped-upgrade-735296/id_rsa Username:docker}
	I0626 18:59:05.308511  494291 ssh_runner.go:195] Run: cat /etc/os-release
	I0626 18:59:05.311407  494291 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0626 18:59:05.311427  494291 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0626 18:59:05.311436  494291 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0626 18:59:05.311442  494291 info.go:137] Remote host: Ubuntu 19.10
	I0626 18:59:05.311454  494291 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/addons for local assets ...
	I0626 18:59:05.311569  494291 filesync.go:126] Scanning /home/jenkins/minikube-integration/16761-330054/.minikube/files for local assets ...
	I0626 18:59:05.311666  494291 filesync.go:149] local asset: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem -> 3369352.pem in /etc/ssl/certs
	I0626 18:59:05.311775  494291 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0626 18:59:05.318361  494291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/ssl/certs/3369352.pem --> /etc/ssl/certs/3369352.pem (1708 bytes)
	I0626 18:59:05.335127  494291 start.go:303] post-start completed in 124.073455ms
	I0626 18:59:05.335229  494291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 18:59:05.335274  494291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-735296
	I0626 18:59:05.352495  494291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33261 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/stopped-upgrade-735296/id_rsa Username:docker}
	I0626 18:59:05.433360  494291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0626 18:59:05.437177  494291 fix.go:56] fixHost completed within 4.914984256s
	I0626 18:59:05.437204  494291 start.go:83] releasing machines lock for "stopped-upgrade-735296", held for 4.915032909s
	I0626 18:59:05.437271  494291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-735296
	I0626 18:59:05.454392  494291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0626 18:59:05.454452  494291 ssh_runner.go:195] Run: cat /version.json
	I0626 18:59:05.454468  494291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-735296
	I0626 18:59:05.454507  494291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-735296
	I0626 18:59:05.472361  494291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33261 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/stopped-upgrade-735296/id_rsa Username:docker}
	I0626 18:59:05.472911  494291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33261 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/stopped-upgrade-735296/id_rsa Username:docker}
	W0626 18:59:05.548134  494291 start.go:493] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0626 18:59:05.548218  494291 ssh_runner.go:195] Run: systemctl --version
	I0626 18:59:05.600520  494291 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0626 18:59:05.656511  494291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0626 18:59:05.660807  494291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:59:05.676340  494291 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0626 18:59:05.676453  494291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0626 18:59:05.700312  494291 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0626 18:59:05.700341  494291 start.go:466] detecting cgroup driver to use...
	I0626 18:59:05.700373  494291 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0626 18:59:05.700419  494291 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0626 18:59:05.720759  494291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0626 18:59:05.729833  494291 docker.go:196] disabling cri-docker service (if available) ...
	I0626 18:59:05.729888  494291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0626 18:59:05.739024  494291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0626 18:59:05.748070  494291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0626 18:59:05.756987  494291 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0626 18:59:05.757050  494291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0626 18:59:05.820951  494291 docker.go:212] disabling docker service ...
	I0626 18:59:05.821008  494291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0626 18:59:05.830558  494291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0626 18:59:05.839512  494291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0626 18:59:05.902568  494291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0626 18:59:05.970598  494291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0626 18:59:05.980187  494291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0626 18:59:05.992066  494291 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0626 18:59:05.992125  494291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0626 18:59:06.001545  494291 out.go:177] 
	W0626 18:59:06.002871  494291 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0626 18:59:06.002894  494291 out.go:239] * 
	* 
	W0626 18:59:06.003880  494291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0626 18:59:06.005660  494291 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-735296 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (111.91s)
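The exit comes from the pause_image rewrite near the end of the log: the sed targets /etc/crio/crio.conf.d/02-crio.conf, which does not exist on the Ubuntu 19.10 guest that the old v1.9.0 binary provisioned, so the command exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal shell sketch of a more tolerant update follows; the sed expression and the drop-in path are copied from the log above, while the fallback to the main /etc/crio/crio.conf is an assumption about where the older image keeps its cri-o settings, not something this log confirms, and this is not the actual minikube implementation.

	# Sketch only: rewrite pause_image in whichever cri-o config file is present.
	# The fallback path /etc/crio/crio.conf is assumed, not taken from this log.
	for f in /etc/crio/crio.conf.d/02-crio.conf /etc/crio/crio.conf; do
	  if sudo test -f "$f"; then
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$f"
	    break
	  fi
	done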

                                                
                                    

Test pass (274/303)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 15.86
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.3/json-events 17.07
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.19
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
18 TestDownloadOnlyKic 1.17
19 TestBinaryMirror 1.11
20 TestOffline 74.82
22 TestAddons/Setup 132.17
24 TestAddons/parallel/Registry 19.33
26 TestAddons/parallel/InspektorGadget 10.44
27 TestAddons/parallel/MetricsServer 5.67
28 TestAddons/parallel/HelmTiller 11.65
30 TestAddons/parallel/CSI 82.91
31 TestAddons/parallel/Headlamp 14.66
32 TestAddons/parallel/CloudSpanner 5.36
35 TestAddons/serial/GCPAuth/Namespaces 0.12
36 TestAddons/StoppedEnableDisable 12.06
37 TestCertOptions 28.41
38 TestCertExpiration 241.26
40 TestForceSystemdFlag 32.16
41 TestForceSystemdEnv 35.72
42 TestKVMDriverInstallOrUpdate 3.65
46 TestErrorSpam/setup 24.49
47 TestErrorSpam/start 0.55
48 TestErrorSpam/status 0.82
49 TestErrorSpam/pause 1.45
50 TestErrorSpam/unpause 1.46
51 TestErrorSpam/stop 1.36
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 66.98
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 41.94
58 TestFunctional/serial/KubeContext 0.04
59 TestFunctional/serial/KubectlGetPods 0.06
62 TestFunctional/serial/CacheCmd/cache/add_remote 3.05
63 TestFunctional/serial/CacheCmd/cache/add_local 1.72
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
65 TestFunctional/serial/CacheCmd/cache/list 0.04
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
67 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
68 TestFunctional/serial/CacheCmd/cache/delete 0.09
69 TestFunctional/serial/MinikubeKubectlCmd 0.1
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
71 TestFunctional/serial/ExtraConfig 32.69
72 TestFunctional/serial/ComponentHealth 0.07
73 TestFunctional/serial/LogsCmd 1.32
74 TestFunctional/serial/LogsFileCmd 1.35
75 TestFunctional/serial/InvalidService 4.05
77 TestFunctional/parallel/ConfigCmd 0.33
78 TestFunctional/parallel/DashboardCmd 14.51
79 TestFunctional/parallel/DryRun 0.35
80 TestFunctional/parallel/InternationalLanguage 0.17
81 TestFunctional/parallel/StatusCmd 1.11
85 TestFunctional/parallel/ServiceCmdConnect 12.55
86 TestFunctional/parallel/AddonsCmd 0.13
87 TestFunctional/parallel/PersistentVolumeClaim 34.24
89 TestFunctional/parallel/SSHCmd 0.61
90 TestFunctional/parallel/CpCmd 1.23
91 TestFunctional/parallel/MySQL 22.75
92 TestFunctional/parallel/FileSync 0.24
93 TestFunctional/parallel/CertSync 1.69
97 TestFunctional/parallel/NodeLabels 0.06
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
101 TestFunctional/parallel/License 0.52
102 TestFunctional/parallel/ServiceCmd/DeployApp 9.21
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.31
108 TestFunctional/parallel/ServiceCmd/List 0.46
109 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
110 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
111 TestFunctional/parallel/ServiceCmd/Format 0.46
112 TestFunctional/parallel/ServiceCmd/URL 0.59
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
115 TestFunctional/parallel/ProfileCmd/profile_list 0.36
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
121 TestFunctional/parallel/MountCmd/any-port 11.15
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.52
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
132 TestFunctional/parallel/ImageCommands/ImageBuild 3.04
133 TestFunctional/parallel/ImageCommands/Setup 1.95
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.68
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.88
136 TestFunctional/parallel/MountCmd/specific-port 2.03
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.66
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.24
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.18
143 TestFunctional/delete_addon-resizer_images 0.07
144 TestFunctional/delete_my-image_image 0.02
145 TestFunctional/delete_minikube_cached_images 0.02
149 TestIngressAddonLegacy/StartLegacyK8sCluster 96.64
151 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.13
152 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.34
156 TestJSONOutput/start/Command 68.97
157 TestJSONOutput/start/Audit 0
159 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/pause/Command 0.63
163 TestJSONOutput/pause/Audit 0
165 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/unpause/Command 0.57
169 TestJSONOutput/unpause/Audit 0
171 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/stop/Command 5.67
175 TestJSONOutput/stop/Audit 0
177 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
179 TestErrorJSONOutput 0.19
181 TestKicCustomNetwork/create_custom_network 41.13
182 TestKicCustomNetwork/use_default_bridge_network 27.09
183 TestKicExistingNetwork 27.59
184 TestKicCustomSubnet 27.48
185 TestKicStaticIP 26.58
186 TestMainNoArgs 0.04
187 TestMinikubeProfile 52.52
190 TestMountStart/serial/StartWithMountFirst 5.51
191 TestMountStart/serial/VerifyMountFirst 0.23
192 TestMountStart/serial/StartWithMountSecond 5.04
193 TestMountStart/serial/VerifyMountSecond 0.24
194 TestMountStart/serial/DeleteFirst 1.63
195 TestMountStart/serial/VerifyMountPostDelete 0.23
196 TestMountStart/serial/Stop 1.18
197 TestMountStart/serial/RestartStopped 6.87
198 TestMountStart/serial/VerifyMountPostStop 0.23
201 TestMultiNode/serial/FreshStart2Nodes 107.02
202 TestMultiNode/serial/DeployApp2Nodes 4.98
204 TestMultiNode/serial/AddNode 18.69
205 TestMultiNode/serial/ProfileList 0.26
206 TestMultiNode/serial/CopyFile 8.6
207 TestMultiNode/serial/StopNode 2.08
208 TestMultiNode/serial/StartAfterStop 10.86
209 TestMultiNode/serial/RestartKeepsNodes 110.55
210 TestMultiNode/serial/DeleteNode 4.59
211 TestMultiNode/serial/StopMultiNode 23.82
212 TestMultiNode/serial/RestartMultiNode 79.7
213 TestMultiNode/serial/ValidateNameConflict 23.01
218 TestPreload 154.58
220 TestScheduledStopUnix 96.68
223 TestInsufficientStorage 12.83
226 TestKubernetesUpgrade 350.99
227 TestMissingContainerUpgrade 174.92
228 TestStoppedBinaryUpgrade/Setup 2.42
237 TestNetworkPlugins/group/false 6.02
248 TestStoppedBinaryUpgrade/MinikubeLogs 0.5
250 TestPause/serial/Start 44.2
251 TestPause/serial/SecondStartNoReconfiguration 42.17
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
254 TestNoKubernetes/serial/StartWithK8s 25.78
255 TestPause/serial/Pause 0.67
256 TestPause/serial/VerifyStatus 0.33
257 TestPause/serial/Unpause 0.67
258 TestPause/serial/PauseAgain 0.78
259 TestPause/serial/DeletePaused 2.72
260 TestPause/serial/VerifyDeletedResources 0.67
261 TestNoKubernetes/serial/StartWithStopK8s 8.74
262 TestNetworkPlugins/group/auto/Start 70.46
263 TestNoKubernetes/serial/Start 4.38
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
265 TestNoKubernetes/serial/ProfileList 1.39
266 TestNoKubernetes/serial/Stop 1.19
267 TestNoKubernetes/serial/StartNoArgs 6.44
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
269 TestNetworkPlugins/group/kindnet/Start 71.65
270 TestNetworkPlugins/group/auto/KubeletFlags 0.25
271 TestNetworkPlugins/group/auto/NetCatPod 10.28
272 TestNetworkPlugins/group/auto/DNS 0.16
273 TestNetworkPlugins/group/auto/Localhost 0.14
274 TestNetworkPlugins/group/auto/HairPin 0.13
275 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
276 TestNetworkPlugins/group/calico/Start 69.99
277 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
278 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
279 TestNetworkPlugins/group/custom-flannel/Start 57.27
280 TestNetworkPlugins/group/kindnet/DNS 0.17
281 TestNetworkPlugins/group/kindnet/Localhost 0.15
282 TestNetworkPlugins/group/kindnet/HairPin 0.14
283 TestNetworkPlugins/group/flannel/Start 61.14
284 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
285 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
286 TestNetworkPlugins/group/calico/ControllerPod 5.02
287 TestNetworkPlugins/group/calico/KubeletFlags 0.24
288 TestNetworkPlugins/group/calico/NetCatPod 10.34
289 TestNetworkPlugins/group/custom-flannel/DNS 0.17
290 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
291 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
292 TestNetworkPlugins/group/calico/DNS 0.17
293 TestNetworkPlugins/group/calico/Localhost 0.14
294 TestNetworkPlugins/group/calico/HairPin 0.15
295 TestNetworkPlugins/group/flannel/ControllerPod 5.02
296 TestNetworkPlugins/group/bridge/Start 78.89
297 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
298 TestNetworkPlugins/group/flannel/NetCatPod 10.18
299 TestNetworkPlugins/group/enable-default-cni/Start 50.58
300 TestNetworkPlugins/group/flannel/DNS 0.17
301 TestNetworkPlugins/group/flannel/Localhost 0.16
302 TestNetworkPlugins/group/flannel/HairPin 0.15
304 TestStartStop/group/old-k8s-version/serial/FirstStart 116.57
305 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
306 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.36
307 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
308 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
309 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
310 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
311 TestNetworkPlugins/group/bridge/NetCatPod 11.35
312 TestNetworkPlugins/group/bridge/DNS 0.16
313 TestNetworkPlugins/group/bridge/Localhost 0.19
314 TestNetworkPlugins/group/bridge/HairPin 0.2
316 TestStartStop/group/no-preload/serial/FirstStart 64.54
318 TestStartStop/group/embed-certs/serial/FirstStart 71.63
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.94
321 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
322 TestStartStop/group/no-preload/serial/DeployApp 10.38
323 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.57
324 TestStartStop/group/old-k8s-version/serial/Stop 11.99
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
326 TestStartStop/group/no-preload/serial/Stop 11.95
327 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
328 TestStartStop/group/old-k8s-version/serial/SecondStart 432.25
329 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
330 TestStartStop/group/no-preload/serial/SecondStart 332.45
331 TestStartStop/group/embed-certs/serial/DeployApp 11.65
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.5
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.74
334 TestStartStop/group/embed-certs/serial/Stop 12.06
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.92
337 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
338 TestStartStop/group/embed-certs/serial/SecondStart 590.03
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.14
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 585.75
341 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.02
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
344 TestStartStop/group/no-preload/serial/Pause 3.42
346 TestStartStop/group/newest-cni/serial/FirstStart 41.92
347 TestStartStop/group/newest-cni/serial/DeployApp 0
348 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.72
349 TestStartStop/group/newest-cni/serial/Stop 2.13
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
351 TestStartStop/group/newest-cni/serial/SecondStart 25.65
352 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
353 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
356 TestStartStop/group/newest-cni/serial/Pause 2.42
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
358 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
359 TestStartStop/group/old-k8s-version/serial/Pause 2.57
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
364 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
365 TestStartStop/group/embed-certs/serial/Pause 2.59
366 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
367 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.52
x
+
TestDownloadOnly/v1.16.0/json-events (15.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-015118 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-015118 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (15.861466009s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (15.86s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-015118
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-015118: exit status 85 (56.179069ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-015118 | jenkins | v1.30.1 | 26 Jun 23 18:25 UTC |          |
	|         | -p download-only-015118        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 18:25:43
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 18:25:43.478416  336946 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:25:43.478563  336946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:25:43.478572  336946 out.go:309] Setting ErrFile to fd 2...
	I0626 18:25:43.478576  336946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:25:43.478681  336946 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	W0626 18:25:43.478794  336946 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16761-330054/.minikube/config/config.json: open /home/jenkins/minikube-integration/16761-330054/.minikube/config/config.json: no such file or directory
	I0626 18:25:43.479340  336946 out.go:303] Setting JSON to true
	I0626 18:25:43.480894  336946 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4093,"bootTime":1687799850,"procs":1024,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 18:25:43.480960  336946 start.go:137] virtualization: kvm guest
	I0626 18:25:43.483449  336946 out.go:97] [download-only-015118] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 18:25:43.484966  336946 out.go:169] MINIKUBE_LOCATION=16761
	W0626 18:25:43.483546  336946 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball: no such file or directory
	I0626 18:25:43.483581  336946 notify.go:220] Checking for updates...
	I0626 18:25:43.487563  336946 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 18:25:43.489127  336946 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:25:43.490608  336946 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 18:25:43.492017  336946 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0626 18:25:43.494680  336946 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0626 18:25:43.494870  336946 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 18:25:43.515743  336946 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 18:25:43.515801  336946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:25:43.559837  336946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-06-26 18:25:43.551034185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:25:43.559959  336946 docker.go:294] overlay module found
	I0626 18:25:43.561781  336946 out.go:97] Using the docker driver based on user configuration
	I0626 18:25:43.561805  336946 start.go:297] selected driver: docker
	I0626 18:25:43.561810  336946 start.go:954] validating driver "docker" against <nil>
	I0626 18:25:43.561887  336946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:25:43.605256  336946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-06-26 18:25:43.597272793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:25:43.605436  336946 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0626 18:25:43.605902  336946 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0626 18:25:43.606038  336946 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0626 18:25:43.607915  336946 out.go:169] Using Docker driver with root privileges
	I0626 18:25:43.609114  336946 cni.go:84] Creating CNI manager for ""
	I0626 18:25:43.609135  336946 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0626 18:25:43.609146  336946 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0626 18:25:43.609158  336946 start_flags.go:319] config:
	{Name:download-only-015118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-015118 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:25:43.610408  336946 out.go:97] Starting control plane node download-only-015118 in cluster download-only-015118
	I0626 18:25:43.610441  336946 cache.go:122] Beginning downloading kic base image for docker with crio
	I0626 18:25:43.611552  336946 out.go:97] Pulling base image ...
	I0626 18:25:43.611575  336946 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 18:25:43.611675  336946 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon
	I0626 18:25:43.626659  336946 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 to local cache
	I0626 18:25:43.626848  336946 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local cache directory
	I0626 18:25:43.626949  336946 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 to local cache
	I0626 18:25:43.786932  336946 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0626 18:25:43.786970  336946 cache.go:57] Caching tarball of preloaded images
	I0626 18:25:43.787157  336946 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 18:25:43.789228  336946 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0626 18:25:43.789244  336946 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0626 18:25:43.962067  336946 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0626 18:25:54.780969  336946 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 as a tarball
	I0626 18:25:56.087152  336946 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0626 18:25:56.087245  336946 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0626 18:25:56.930778  336946 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0626 18:25:56.931119  336946 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/download-only-015118/config.json ...
	I0626 18:25:56.931149  336946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/download-only-015118/config.json: {Name:mk6a265c2ec9b03474450a6b890466e8b18d87dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0626 18:25:56.931335  336946 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0626 18:25:56.931518  336946 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-015118"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
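The "Last Start" log above also records how the v1.16.0 preload is fetched: download.go pulls the tarball from storage.googleapis.com, and the ?checksum=md5:... query string appears to be consumed by minikube's downloader rather than by the object store, with the digest verified before the archive is cached. As a rough manual equivalent (a sketch, not what download.go does internally; the URL and md5 digest are copied from the log above):

	# Fetch the preload tarball and verify it against the md5 digest from the log.
	curl -fLo preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 \
	  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	echo "432b600409d778ea7a21214e83948570  preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -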

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/json-events (17.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-015118 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-015118 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (17.067112259s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (17.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-015118
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-015118: exit status 85 (59.46586ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-015118 | jenkins | v1.30.1 | 26 Jun 23 18:25 UTC |          |
	|         | -p download-only-015118        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-015118 | jenkins | v1.30.1 | 26 Jun 23 18:25 UTC |          |
	|         | -p download-only-015118        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/26 18:25:59
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0626 18:25:59.398906  337110 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:25:59.399058  337110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:25:59.399067  337110 out.go:309] Setting ErrFile to fd 2...
	I0626 18:25:59.399072  337110 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:25:59.399197  337110 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	W0626 18:25:59.399311  337110 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16761-330054/.minikube/config/config.json: open /home/jenkins/minikube-integration/16761-330054/.minikube/config/config.json: no such file or directory
	I0626 18:25:59.399736  337110 out.go:303] Setting JSON to true
	I0626 18:25:59.401290  337110 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4109,"bootTime":1687799850,"procs":1017,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 18:25:59.401361  337110 start.go:137] virtualization: kvm guest
	I0626 18:25:59.403482  337110 out.go:97] [download-only-015118] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 18:25:59.405153  337110 out.go:169] MINIKUBE_LOCATION=16761
	I0626 18:25:59.403681  337110 notify.go:220] Checking for updates...
	I0626 18:25:59.408304  337110 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 18:25:59.409814  337110 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:25:59.411414  337110 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 18:25:59.412807  337110 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0626 18:25:59.415099  337110 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0626 18:25:59.415506  337110 config.go:182] Loaded profile config "download-only-015118": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0626 18:25:59.415561  337110 start.go:862] api.Load failed for download-only-015118: filestore "download-only-015118": Docker machine "download-only-015118" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0626 18:25:59.415671  337110 driver.go:373] Setting default libvirt URI to qemu:///system
	W0626 18:25:59.415718  337110 start.go:862] api.Load failed for download-only-015118: filestore "download-only-015118": Docker machine "download-only-015118" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0626 18:25:59.436437  337110 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 18:25:59.436514  337110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:25:59.486255  337110 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-06-26 18:25:59.477671969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:25:59.486357  337110 docker.go:294] overlay module found
	I0626 18:25:59.488261  337110 out.go:97] Using the docker driver based on existing profile
	I0626 18:25:59.488281  337110 start.go:297] selected driver: docker
	I0626 18:25:59.488286  337110 start.go:954] validating driver "docker" against &{Name:download-only-015118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-015118 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:25:59.488428  337110 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:25:59.533822  337110 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-06-26 18:25:59.525469321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:25:59.534641  337110 cni.go:84] Creating CNI manager for ""
	I0626 18:25:59.534669  337110 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0626 18:25:59.534683  337110 start_flags.go:319] config:
	{Name:download-only-015118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-015118 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:25:59.536816  337110 out.go:97] Starting control plane node download-only-015118 in cluster download-only-015118
	I0626 18:25:59.536855  337110 cache.go:122] Beginning downloading kic base image for docker with crio
	I0626 18:25:59.538240  337110 out.go:97] Pulling base image ...
	I0626 18:25:59.538261  337110 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:25:59.538369  337110 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local docker daemon
	I0626 18:25:59.553413  337110 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 to local cache
	I0626 18:25:59.553560  337110 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local cache directory
	I0626 18:25:59.553576  337110 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 in local cache directory, skipping pull
	I0626 18:25:59.553580  337110 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 exists in cache, skipping pull
	I0626 18:25:59.553592  337110 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 as a tarball
	I0626 18:25:59.632565  337110 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 18:25:59.632601  337110 cache.go:57] Caching tarball of preloaded images
	I0626 18:25:59.632785  337110 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:25:59.634751  337110 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0626 18:25:59.634775  337110 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0626 18:25:59.733946  337110 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:36a3ccedce25b36b9ffc5201ce124dec -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0626 18:26:13.121154  337110 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0626 18:26:13.121253  337110 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16761-330054/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 ...
	I0626 18:26:14.005966  337110 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0626 18:26:14.006112  337110 profile.go:148] Saving config to /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/download-only-015118/config.json ...
	I0626 18:26:14.006331  337110 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0626 18:26:14.006516  337110 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16761-330054/.minikube/cache/linux/amd64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-015118"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.06s)
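
Note: the preload download logged above fetches the tarball with a "?checksum=md5:..." query and then saves and verifies that checksum before caching. A minimal sketch of the same pattern, assuming an arbitrary URL and expected digest (the values below are placeholders, not taken from this run, and this is not minikube's actual downloader):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest while hashing it, then compares the
// resulting md5 digest against the expected value.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to disk and hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Placeholder arguments for illustration only.
	if err := downloadWithMD5("https://example.com/preload.tar.lz4", "/tmp/preload.tar.lz4", "0123456789abcdef0123456789abcdef"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}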

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-015118
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.17s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-097588 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-097588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-097588
--- PASS: TestDownloadOnlyKic (1.17s)

                                                
                                    
x
+
TestBinaryMirror (1.11s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-877393 --alsologtostderr --binary-mirror http://127.0.0.1:32817 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-877393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-877393
--- PASS: TestBinaryMirror (1.11s)
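
Note: TestBinaryMirror points --binary-mirror at a throwaway local HTTP endpoint (127.0.0.1:32817 in this run) that stands in for the usual release download host. A minimal sketch of such a mirror, assuming the binaries have already been staged under a local directory (the directory path is a placeholder):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory of pre-downloaded release binaries over HTTP.
	// "/srv/k8s-mirror" and the listen address are illustrative only.
	http.Handle("/", http.FileServer(http.Dir("/srv/k8s-mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:32817", nil))
}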

                                                
                                    
x
+
TestOffline (74.82s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-715574 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-715574 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m10.49459167s)
helpers_test.go:175: Cleaning up "offline-crio-715574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-715574
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-715574: (4.320968648s)
--- PASS: TestOffline (74.82s)

                                                
                                    
x
+
TestAddons/Setup (132.17s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-052687 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-052687 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m12.164779357s)
--- PASS: TestAddons/Setup (132.17s)

                                                
                                    
x
+
TestAddons/parallel/Registry (19.33s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 14.217353ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2sks6" [26f467fb-cc2f-4224-9db7-9f814feb6f78] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010818318s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g4qzm" [c00bd5d7-17c7-4e9d-ab97-78cf04ed731d] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.035221382s
addons_test.go:316: (dbg) Run:  kubectl --context addons-052687 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-052687 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-052687 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.525956185s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 ip
2023/06/26 18:28:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.33s)
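
Note: the registry check above probes the in-cluster service with "wget --spider" from a throwaway busybox pod, then issues a plain GET against the node's registry port (the DEBUG line). A hedged Go version of that last probe, reusing the address printed in this run (substitute the output of "minikube ip" for other runs):

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Address as printed in the DEBUG GET line above.
	resp, err := client.Get("http://192.168.49.2:5000")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with", resp.Status)
}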

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.44s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qfbgl" [112f3086-45b0-4968-a317-11b340f22159] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007980131s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-052687
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-052687: (5.432670567s)
--- PASS: TestAddons/parallel/InspektorGadget (10.44s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 13.316229ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-6zgpf" [82860bec-db0a-4d75-a5af-543a8abf33c3] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01103371s
addons_test.go:391: (dbg) Run:  kubectl --context addons-052687 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.67s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.65s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 12.783488ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-fxdms" [91bdb879-b774-43fb-a404-ab2bdc6d3ef4] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010106536s
addons_test.go:449: (dbg) Run:  kubectl --context addons-052687 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-052687 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.02700547s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.65s)

                                                
                                    
x
+
TestAddons/parallel/CSI (82.91s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.390858ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-052687 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-052687 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8706b266-5a62-4cfe-bc10-3716028e7b83] Pending
helpers_test.go:344: "task-pv-pod" [8706b266-5a62-4cfe-bc10-3716028e7b83] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8706b266-5a62-4cfe-bc10-3716028e7b83] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.006998149s
addons_test.go:560: (dbg) Run:  kubectl --context addons-052687 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-052687 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-052687 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-052687 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-052687 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-052687 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-052687 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-052687 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [15e8f0f1-d6f0-4e52-a18b-a0d618cd2dbd] Pending
helpers_test.go:344: "task-pv-pod-restore" [15e8f0f1-d6f0-4e52-a18b-a0d618cd2dbd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [15e8f0f1-d6f0-4e52-a18b-a0d618cd2dbd] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007347348s
addons_test.go:602: (dbg) Run:  kubectl --context addons-052687 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-052687 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-052687 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-052687 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.437413287s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-052687 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (82.91s)
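
Note: the long runs of identical "kubectl get pvc ... -o jsonpath={.status.phase}" calls above are the test helper polling until each claim reports Bound before it moves on. A minimal sketch of the same wait loop, shelling out to kubectl (the poll interval and timeout here are illustrative, not the helper's actual values):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls `kubectl get pvc` until the claim's phase is Bound
// or the timeout elapses.
func waitForPVCBound(context, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-052687", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}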

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-052687 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-pbtjd" [568e6718-7cab-4846-9138-d62b96ac4953] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-pbtjd" [568e6718-7cab-4846-9138-d62b96ac4953] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.029307119s
--- PASS: TestAddons/parallel/Headlamp (14.66s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.36s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-fb67554b8-qnwh7" [e0e11406-aa73-41ad-a642-06b55e71c163] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008994615s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-052687
--- PASS: TestAddons/parallel/CloudSpanner (5.36s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-052687 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-052687 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.06s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-052687
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-052687: (11.88515851s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-052687
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-052687
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-052687
--- PASS: TestAddons/StoppedEnableDisable (12.06s)

                                                
                                    
x
+
TestCertOptions (28.41s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-875713 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-875713 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.970591647s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-875713 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-875713 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-875713 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-875713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-875713
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-875713: (1.867034616s)
--- PASS: TestCertOptions (28.41s)
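
Note: TestCertOptions verifies that the extra --apiserver-ips/--apiserver-names values end up in the generated certificate by dumping it with "openssl x509 -text -noout". A hedged Go equivalent that parses the same PEM file and prints its SANs (the path is the one quoted in the ssh command above, so this would have to run inside the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP SANs:  ", cert.IPAddresses)
	fmt.Println("Not after:", cert.NotAfter)
}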

                                                
                                    
x
+
TestCertExpiration (241.26s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-604881 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-604881 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.887431414s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-604881 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-604881 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.047884549s)
helpers_test.go:175: Cleaning up "cert-expiration-604881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-604881
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-604881: (4.322704951s)
--- PASS: TestCertExpiration (241.26s)

                                                
                                    
x
+
TestForceSystemdFlag (32.16s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-472086 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-472086 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.846735649s)
docker_test.go:126: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-472086 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
E0626 18:58:31.373001  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "force-systemd-flag-472086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-472086
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-472086: (4.886402121s)
--- PASS: TestForceSystemdFlag (32.16s)

                                                
                                    
x
+
TestForceSystemdEnv (35.72s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-550603 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0626 18:57:56.659305  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-550603 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.796954352s)
helpers_test.go:175: Cleaning up "force-systemd-env-550603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-550603
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-550603: (4.926561995s)
--- PASS: TestForceSystemdEnv (35.72s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.65s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.65s)

                                                
                                    
x
+
TestErrorSpam/setup (24.49s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-857847 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-857847 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-857847 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-857847 --driver=docker  --container-runtime=crio: (24.490450813s)
--- PASS: TestErrorSpam/setup (24.49s)

                                                
                                    
x
+
TestErrorSpam/start (0.55s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

                                                
                                    
x
+
TestErrorSpam/status (0.82s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 status
--- PASS: TestErrorSpam/status (0.82s)

                                                
                                    
x
+
TestErrorSpam/pause (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 pause
--- PASS: TestErrorSpam/pause (1.45s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 unpause
--- PASS: TestErrorSpam/unpause (1.46s)

                                                
                                    
x
+
TestErrorSpam/stop (1.36s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 stop: (1.191696946s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-857847 --log_dir /tmp/nospam-857847 stop
--- PASS: TestErrorSpam/stop (1.36s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16761-330054/.minikube/files/etc/test/nested/copy/336935/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (66.98s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900227 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0626 18:33:31.372945  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:31.379167  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:31.389433  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:31.409713  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:31.449999  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:31.530320  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:31.690752  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:32.011317  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:32.652209  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:33.932650  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:36.494519  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:33:41.615347  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-900227 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m6.977273961s)
--- PASS: TestFunctional/serial/StartWithProxy (66.98s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (41.94s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900227 --alsologtostderr -v=8
E0626 18:33:51.856025  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:34:12.336204  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-900227 --alsologtostderr -v=8: (41.943308675s)
functional_test.go:659: soft start took 41.944088123s for "functional-900227" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.94s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-900227 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 cache add registry.k8s.io/pause:3.1: (1.047656675s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 cache add registry.k8s.io/pause:3.3: (1.000680415s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 cache add registry.k8s.io/pause:latest: (1.001356101s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-900227 /tmp/TestFunctionalserialCacheCmdcacheadd_local2042926039/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 cache add minikube-local-cache-test:functional-900227
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 cache add minikube-local-cache-test:functional-900227: (1.435706835s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 cache delete minikube-local-cache-test:functional-900227
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-900227
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (252.052351ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 kubectl -- --context functional-900227 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-900227 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.69s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900227 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0626 18:34:53.297004  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-900227 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.688876323s)
functional_test.go:757: restart took 32.688978239s for "functional-900227" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.69s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-900227 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
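
Note: the phase/status lines above come from decoding "kubectl get po -l tier=control-plane -o json" and checking each pod's phase and Ready condition. A minimal sketch that decodes only the fields such a check needs, assuming the same context and label selector:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList models just the fields the health check reads from kubectl's JSON.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-900227",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}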

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 logs: (1.321237646s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 logs --file /tmp/TestFunctionalserialLogsFileCmd2604428538/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 logs --file /tmp/TestFunctionalserialLogsFileCmd2604428538/001/logs.txt: (1.354004882s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.05s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-900227 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-900227
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-900227: exit status 115 (316.357885ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30781 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-900227 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)
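
The run above shows `minikube service` exiting with status 115 (SVC_UNREACHABLE) when the service object exists but has no running pod behind it. A minimal sketch, not part of the test suite, of checking for that condition from Go; the binary path and profile name are copied from the log, and treating exit code 115 as "service unreachable" is an assumption based on this run rather than a documented contract.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask minikube for the URL of a service; this run shows exit status 115
	// when the service has no running pod to route to.
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-900227")
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 115 {
		fmt.Println("service is not reachable (no running pod behind it)")
		fmt.Printf("%s", out)
		return
	}
	if err != nil {
		fmt.Println("unexpected failure:", err)
		return
	}
	fmt.Printf("%s", out)
}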

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 config get cpus: exit status 14 (64.238693ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 config get cpus: exit status 14 (44.542709ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
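
The sequence above shows `config get cpus` exiting with status 14 while the key is unset and succeeding after `config set cpus 2`. A minimal sketch, assuming the same binary and profile, that uses that exit code to decide whether a key is present; the meaning of exit code 14 is inferred from this log, not from documented behavior.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-900227", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 14 {
		// Matches the "specified key could not be found in config" case above.
		fmt.Println("cpus is not set in this profile's config")
		return
	}
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Printf("cpus = %s", out)
}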

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (14.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-900227 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-900227 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 370753: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.51s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900227 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-900227 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (145.870443ms)

                                                
                                                
-- stdout --
	* [functional-900227] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 18:35:24.006325  369759 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:35:24.006523  369759 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:35:24.006536  369759 out.go:309] Setting ErrFile to fd 2...
	I0626 18:35:24.006544  369759 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:35:24.006707  369759 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 18:35:24.007757  369759 out.go:303] Setting JSON to false
	I0626 18:35:24.009309  369759 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4674,"bootTime":1687799850,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 18:35:24.009383  369759 start.go:137] virtualization: kvm guest
	I0626 18:35:24.011557  369759 out.go:177] * [functional-900227] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 18:35:24.013575  369759 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 18:35:24.013624  369759 notify.go:220] Checking for updates...
	I0626 18:35:24.015110  369759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 18:35:24.016658  369759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:35:24.018030  369759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 18:35:24.019274  369759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 18:35:24.020460  369759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 18:35:24.022117  369759 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:35:24.022602  369759 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 18:35:24.046333  369759 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 18:35:24.046441  369759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:35:24.098050  369759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-06-26 18:35:24.088967933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:35:24.098159  369759 docker.go:294] overlay module found
	I0626 18:35:24.100018  369759 out.go:177] * Using the docker driver based on existing profile
	I0626 18:35:24.101470  369759 start.go:297] selected driver: docker
	I0626 18:35:24.101486  369759 start.go:954] validating driver "docker" against &{Name:functional-900227 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-900227 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:35:24.101591  369759 start.go:965] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 18:35:24.103825  369759 out.go:177] 
	W0626 18:35:24.105518  369759 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0626 18:35:24.106846  369759 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900227 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-900227 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-900227 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (166.889452ms)

                                                
                                                
-- stdout --
	* [functional-900227] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 18:35:24.365246  370031 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:35:24.365386  370031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:35:24.365396  370031 out.go:309] Setting ErrFile to fd 2...
	I0626 18:35:24.365403  370031 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:35:24.365617  370031 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 18:35:24.366147  370031 out.go:303] Setting JSON to false
	I0626 18:35:24.367236  370031 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":4674,"bootTime":1687799850,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 18:35:24.367318  370031 start.go:137] virtualization: kvm guest
	I0626 18:35:24.369622  370031 out.go:177] * [functional-900227] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	I0626 18:35:24.370886  370031 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 18:35:24.370932  370031 notify.go:220] Checking for updates...
	I0626 18:35:24.372291  370031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 18:35:24.373561  370031 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:35:24.374957  370031 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 18:35:24.376622  370031 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 18:35:24.378498  370031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 18:35:24.380973  370031 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:35:24.381519  370031 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 18:35:24.411020  370031 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 18:35:24.411219  370031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:35:24.471727  370031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-06-26 18:35:24.462806974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:35:24.471862  370031 docker.go:294] overlay module found
	I0626 18:35:24.473858  370031 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0626 18:35:24.475117  370031 start.go:297] selected driver: docker
	I0626 18:35:24.475131  370031 start.go:954] validating driver "docker" against &{Name:functional-900227 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-900227 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0626 18:35:24.475256  370031 start.go:965] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 18:35:24.477503  370031 out.go:177] 
	W0626 18:35:24.479011  370031 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0626 18:35:24.480401  370031 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
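
The French stderr above is the same RSRC_INSUFFICIENT_REQ_MEMORY validation as in the English dry-run: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB." A minimal sketch, outside the test harness, of forcing a localized run; it assumes minikube picks its display language from the standard LC_ALL/LANG environment variables, which appears to be how this test triggers the French output.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-900227",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	// Assumption: minikube localizes output based on LC_ALL/LANG.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// Exit status 23 is expected here, exactly as in the English dry-run above.
		fmt.Println("exit:", err)
	}
}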

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (12.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-900227 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-900227 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-jwfxs" [b8fcd924-62ba-4aa5-8e5d-3b0d4a027780] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-jwfxs" [b8fcd924-62ba-4aa5-8e5d-3b0d4a027780] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.007900449s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31839
functional_test.go:1674: http://192.168.49.2:31839: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6fb669fc84-jwfxs

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31839
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.55s)
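
The test above resolves the NodePort endpoint with `minikube service hello-node-connect --url` and then fetches it. A minimal sketch, not part of the test suite, of the same resolve-then-probe pattern with the deployment and service assumed to already exist; the retry loop is an addition, since a freshly exposed NodePort can briefly refuse connections.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Resolve the NodePort URL the same way the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-900227",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("could not resolve service URL:", err)
		return
	}
	url := strings.TrimSpace(string(out))

	// Probe the endpoint a few times; a brand-new NodePort may not answer instantly.
	for i := 0; i < 10; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s: success! body:\n%s", url, body)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("endpoint never became reachable:", url)
}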

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (34.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [65992802-66db-416b-b50d-56a870dea23c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009748226s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-900227 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-900227 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-900227 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-900227 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [17fd8d1a-883c-4e66-baf7-1d2abe902a61] Pending
helpers_test.go:344: "sp-pod" [17fd8d1a-883c-4e66-baf7-1d2abe902a61] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [17fd8d1a-883c-4e66-baf7-1d2abe902a61] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.089922821s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-900227 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-900227 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-900227 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [99c1415f-7e8e-4370-b989-64352162997b] Pending
helpers_test.go:344: "sp-pod" [99c1415f-7e8e-4370-b989-64352162997b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [99c1415f-7e8e-4370-b989-64352162997b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.072721977s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-900227 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.24s)
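
The persistence check above works by writing /tmp/mount/foo from the first sp-pod, deleting that pod, recreating it from the same manifest, and listing the mount again. A minimal sketch of the same sequence driven with kubectl directly; the context, pod name, and manifest path are taken from the log, and a `kubectl wait` step stands in for the readiness polling the test performs between recreate and the final check.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl against the functional-900227 context and returns combined output.
func run(args ...string) (string, error) {
	full := append([]string{"--context", "functional-900227"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// Write a marker file onto the PVC-backed mount, recreate the pod from the
	// same manifest, and confirm the file survived the pod's deletion.
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		out, err := run(s...)
		fmt.Printf("kubectl %v\n%s", s, out)
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}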

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh -n functional-900227 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 cp functional-900227:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2794330320/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh -n functional-900227 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-900227 replace --force -f testdata/mysql.yaml
2023/06/26 18:35:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-cjfct" [7067f568-4904-42b0-a614-3f2248063cfd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-cjfct" [7067f568-4904-42b0-a614-3f2248063cfd] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.008737502s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-900227 exec mysql-7db894d786-cjfct -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-900227 exec mysql-7db894d786-cjfct -- mysql -ppassword -e "show databases;": exit status 1 (141.622918ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-900227 exec mysql-7db894d786-cjfct -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-900227 exec mysql-7db894d786-cjfct -- mysql -ppassword -e "show databases;": exit status 1 (145.81684ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-900227 exec mysql-7db894d786-cjfct -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.75s)
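
The two failures above with ERROR 2002 cover the window between the pod reporting Running and mysqld actually opening its socket; the test simply retries the query until it succeeds. A minimal sketch of the same retry loop, with the pod name taken from this run (a new deployment would produce a different hash) and the retry budget chosen arbitrarily.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-900227", "exec", "mysql-7db894d786-cjfct", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		// ERROR 2002 just means mysqld has not opened its socket yet, even
		// though the pod already reports Running; back off and retry.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql did not become ready in time")
}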

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/336935/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo cat /etc/test/nested/copy/336935/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/336935.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo cat /etc/ssl/certs/336935.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/336935.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo cat /usr/share/ca-certificates/336935.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3369352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo cat /etc/ssl/certs/3369352.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3369352.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo cat /usr/share/ca-certificates/3369352.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-900227 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 ssh "sudo systemctl is-active docker": exit status 1 (233.181288ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 ssh "sudo systemctl is-active containerd": exit status 1 (240.57774ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
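
With crio as the active runtime, the test asserts that docker and containerd are both stopped inside the node; `systemctl is-active` prints "inactive" and exits non-zero for a stopped unit, which is why the non-zero exits above are the expected outcome. A minimal sketch, outside the test harness, that checks both units over `minikube ssh` using the binary path and profile from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// With crio selected as the container runtime, docker and containerd
	// should both be stopped inside the node.
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-900227",
			"ssh", "sudo systemctl is-active "+unit).Output()
		// `systemctl is-active` exits non-zero for any state other than
		// "active", so a non-nil err is expected for a stopped unit.
		state := strings.TrimSpace(string(out))
		if state == "inactive" {
			fmt.Println(unit, "is inactive, as expected")
		} else {
			fmt.Println(unit, "reports:", state, "err:", err)
		}
	}
}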

                                                
                                    
x
+
TestFunctional/parallel/License (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-900227 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-900227 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-g4vct" [7485f2f5-5e74-464b-9034-b610ad9931b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-g4vct" [7485f2f5-5e74-464b-9034-b610ad9931b6] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.016497872s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.21s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-900227 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-900227 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-900227 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 367528: os: process already finished
helpers_test.go:502: unable to terminate pid 367206: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-900227 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-900227 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-900227 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [dc4fb1cb-8fea-4646-8f7e-c09360563fc2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [dc4fb1cb-8fea-4646-8f7e-c09360563fc2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.008197108s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 service list -o json
functional_test.go:1493: Took "480.42049ms" to run "out/minikube-linux-amd64 -p functional-900227 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31545
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31545
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-900227 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "297.024387ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "63.041822ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.137.69 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-900227 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (11.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdany-port2304509798/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1687804523444677296" to /tmp/TestFunctionalparallelMountCmdany-port2304509798/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1687804523444677296" to /tmp/TestFunctionalparallelMountCmdany-port2304509798/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1687804523444677296" to /tmp/TestFunctionalparallelMountCmdany-port2304509798/001/test-1687804523444677296
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.112447ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 26 18:35 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 26 18:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 26 18:35 test-1687804523444677296
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh cat /mount-9p/test-1687804523444677296
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-900227 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2e024dee-4f26-46df-8d3b-85dc877a5a08] Pending
helpers_test.go:344: "busybox-mount" [2e024dee-4f26-46df-8d3b-85dc877a5a08] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2e024dee-4f26-46df-8d3b-85dc877a5a08] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2e024dee-4f26-46df-8d3b-85dc877a5a08] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.062640551s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-900227 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdany-port2304509798/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.15s)
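
The first `findmnt -T /mount-9p | grep 9p` above fails with exit status 1 because the 9p mount is not yet visible right after the background `minikube mount` daemon starts; the test just runs the check again. A minimal sketch of the same wait, assuming the mount command from the log has already been started in the background.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until the 9p mount shows up inside the node; immediately after
	// `minikube mount ...` starts, the first check can legitimately fail.
	for i := 0; i < 15; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-900227",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared at /mount-9p")
}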

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "299.9069ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "42.357326ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
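
All three no_changes/no_minikube_cluster/no_clusters variants exercise the same command: minikube update-context rewrites the kubeconfig entry for the profile so that the API server address matches the container's current IP and port. A minimal follow-up check, assuming kubectl is on PATH:

out/minikube-linux-amd64 -p functional-900227 update-context --alsologtostderr -v=2
kubectl config current-context
kubectl --context functional-900227 get nodes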

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)
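
version --short prints only the minikube version string, while version -o=json --components additionally reports the versions of the component binaries inside the node (the exact set depends on the container runtime). A sketch, with jq assumed installed for pretty-printing:

out/minikube-linux-amd64 -p functional-900227 version --short
out/minikube-linux-amd64 -p functional-900227 version -o=json --components | jq .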

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900227 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-900227
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900227 image ls --format short --alsologtostderr:
I0626 18:35:55.742735  374683 out.go:296] Setting OutFile to fd 1 ...
I0626 18:35:55.742858  374683 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:55.742875  374683 out.go:309] Setting ErrFile to fd 2...
I0626 18:35:55.742882  374683 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:55.743013  374683 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
I0626 18:35:55.743748  374683 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:55.743887  374683 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:55.744357  374683 cli_runner.go:164] Run: docker container inspect functional-900227 --format={{.State.Status}}
I0626 18:35:55.765904  374683 ssh_runner.go:195] Run: systemctl --version
I0626 18:35:55.765962  374683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900227
I0626 18:35:55.786602  374683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/functional-900227/id_rsa Username:docker}
I0626 18:35:55.877227  374683 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
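
The stderr trace shows how every image ls variant works: the CLI opens an SSH session to the node and runs sudo crictl images --output json, then renders that data as short, table, json or yaml. The same raw listing can be inspected directly; a sketch assuming the profile is up:

# the raw CRI listing that backs every `image ls` format
out/minikube-linux-amd64 -p functional-900227 ssh "sudo crictl images --output json"

# the four rendered views of the same data
out/minikube-linux-amd64 -p functional-900227 image ls --format short
out/minikube-linux-amd64 -p functional-900227 image ls --format table
out/minikube-linux-amd64 -p functional-900227 image ls --format json
out/minikube-linux-amd64 -p functional-900227 image ls --format yaml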

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900227 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| docker.io/library/mysql                 | 5.7                | 2be84dd575ee2 | 588MB  |
| docker.io/library/nginx                 | alpine             | 4937520ae206c | 43.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| gcr.io/google-containers/addon-resizer  | functional-900227  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.27.3            | 41697ceeb70b3 | 59.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | eb4a571591807 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.27.3            | 5780543258cf0 | 72.7MB |
| registry.k8s.io/kube-apiserver          | v1.27.3            | 08a0c939e61b7 | 122MB  |
| registry.k8s.io/kube-controller-manager | v1.27.3            | 7cffc01dba0e1 | 114MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900227 image ls --format table --alsologtostderr:
I0626 18:35:55.971739  374853 out.go:296] Setting OutFile to fd 1 ...
I0626 18:35:55.971900  374853 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:55.971910  374853 out.go:309] Setting ErrFile to fd 2...
I0626 18:35:55.971917  374853 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:55.972043  374853 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
I0626 18:35:55.972681  374853 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:55.972825  374853 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:55.973364  374853 cli_runner.go:164] Run: docker container inspect functional-900227 --format={{.State.Status}}
I0626 18:35:55.990656  374853 ssh_runner.go:195] Run: systemctl --version
I0626 18:35:55.990703  374853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900227
I0626 18:35:56.010955  374853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/functional-900227/id_rsa Username:docker}
I0626 18:35:56.105409  374853 ssh_runner.go:195] Run: sudo crictl images --output json
W0626 18:35:56.152332  374853 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 078fe03c-0d7e-4e10-af46-52ec90568397
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900227 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e","registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"113919286"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256
:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-900227"],"size":"34114467"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af
9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb","reg
istry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"122065872"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["registry.k8s.io/kube-proxy@sha256:091c9fe8428334e24
51a0e5d214d40c415f2e0d0861794ee941f48003726570f","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"72713623"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"eb4a57159180767450cb8426e6367f11b999653d8f185b5e3b78a9ca30c2c31d","repoDigests":["docker.io/library/nginx@sha256:593dac25b7733ffb7afe1a72649a43e574778bf025ad60514ef40f6b5d606247","docker.io/library/nginx@sha256:d2b2f2980e9ccc570e5726b56b54580f23a018b7b7314c9eaff7e5e479c78657"],"repoTags":["docker.io/library/nginx:latest"],"size":"191044354"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/
k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"59811126"},{"id":"2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":["docker.io/library/mysql@sha256:03b6d
cedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1","docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde"],"repoTags":["docker.io/library/mysql:5.7"],"size":"588268197"},{"id":"4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6","docker.io/library/nginx@sha256:2d4efe74ef541248b0a70838c557de04509d1115dec6bfc21ad0d66e41574a8a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"43220780"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900227 image ls --format json --alsologtostderr:
I0626 18:35:55.968646  374846 out.go:296] Setting OutFile to fd 1 ...
I0626 18:35:55.968785  374846 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:55.968795  374846 out.go:309] Setting ErrFile to fd 2...
I0626 18:35:55.968800  374846 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:55.968971  374846 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
I0626 18:35:55.969580  374846 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:55.969688  374846 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:55.970057  374846 cli_runner.go:164] Run: docker container inspect functional-900227 --format={{.State.Status}}
I0626 18:35:55.986924  374846 ssh_runner.go:195] Run: systemctl --version
I0626 18:35:55.986979  374846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900227
I0626 18:35:56.007320  374846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/functional-900227/id_rsa Username:docker}
I0626 18:35:56.101869  374846 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900227 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "72713623"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "59811126"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
- docker.io/library/nginx@sha256:2d4efe74ef541248b0a70838c557de04509d1115dec6bfc21ad0d66e41574a8a
repoTags:
- docker.io/library/nginx:alpine
size: "43220780"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-900227
size: "34114467"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
- registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "113919286"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests:
- docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1
- docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde
repoTags:
- docker.io/library/mysql:5.7
size: "588268197"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: eb4a57159180767450cb8426e6367f11b999653d8f185b5e3b78a9ca30c2c31d
repoDigests:
- docker.io/library/nginx@sha256:593dac25b7733ffb7afe1a72649a43e574778bf025ad60514ef40f6b5d606247
- docker.io/library/nginx@sha256:d2b2f2980e9ccc570e5726b56b54580f23a018b7b7314c9eaff7e5e479c78657
repoTags:
- docker.io/library/nginx:latest
size: "191044354"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "122065872"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900227 image ls --format yaml --alsologtostderr:
I0626 18:35:55.748309  374681 out.go:296] Setting OutFile to fd 1 ...
I0626 18:35:55.748467  374681 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:55.748474  374681 out.go:309] Setting ErrFile to fd 2...
I0626 18:35:55.748480  374681 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:55.748667  374681 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
I0626 18:35:55.751661  374681 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:55.751823  374681 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:55.752412  374681 cli_runner.go:164] Run: docker container inspect functional-900227 --format={{.State.Status}}
I0626 18:35:55.769992  374681 ssh_runner.go:195] Run: systemctl --version
I0626 18:35:55.770040  374681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900227
I0626 18:35:55.786947  374681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/functional-900227/id_rsa Username:docker}
I0626 18:35:55.880918  374681 ssh_runner.go:195] Run: sudo crictl images --output json
W0626 18:35:55.921314  374681 root.go:91] failed to log command end to audit: failed to find a log row with id equals to ded5e293-143d-4d38-bc97-49385156684b
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 ssh pgrep buildkitd: exit status 1 (255.605586ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image build -t localhost/my-image:functional-900227 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 image build -t localhost/my-image:functional-900227 testdata/build --alsologtostderr: (2.551022384s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-900227 image build -t localhost/my-image:functional-900227 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7b1a20d6fe1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-900227
--> eccb84896e3
Successfully tagged localhost/my-image:functional-900227
eccb84896e313340949daadcd28ab3b8f3b2265f5e7353882a08ad81d3db7ccd
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-900227 image build -t localhost/my-image:functional-900227 testdata/build --alsologtostderr:
I0626 18:35:56.002856  374869 out.go:296] Setting OutFile to fd 1 ...
I0626 18:35:56.003023  374869 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:56.003034  374869 out.go:309] Setting ErrFile to fd 2...
I0626 18:35:56.003041  374869 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0626 18:35:56.003191  374869 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
I0626 18:35:56.003816  374869 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:56.004531  374869 config.go:182] Loaded profile config "functional-900227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0626 18:35:56.005088  374869 cli_runner.go:164] Run: docker container inspect functional-900227 --format={{.State.Status}}
I0626 18:35:56.023079  374869 ssh_runner.go:195] Run: systemctl --version
I0626 18:35:56.023139  374869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900227
I0626 18:35:56.042299  374869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33092 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/functional-900227/id_rsa Username:docker}
I0626 18:35:56.138175  374869 build_images.go:151] Building image from path: /tmp/build.949289295.tar
I0626 18:35:56.138263  374869 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0626 18:35:56.148688  374869 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.949289295.tar
I0626 18:35:56.196035  374869 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.949289295.tar: stat -c "%s %y" /var/lib/minikube/build/build.949289295.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.949289295.tar': No such file or directory
I0626 18:35:56.196070  374869 ssh_runner.go:362] scp /tmp/build.949289295.tar --> /var/lib/minikube/build/build.949289295.tar (3072 bytes)
I0626 18:35:56.219281  374869 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.949289295
I0626 18:35:56.227736  374869 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.949289295 -xf /var/lib/minikube/build/build.949289295.tar
I0626 18:35:56.236573  374869 crio.go:297] Building image: /var/lib/minikube/build/build.949289295
I0626 18:35:56.236638  374869 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-900227 /var/lib/minikube/build/build.949289295 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0626 18:35:58.483255  374869 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-900227 /var/lib/minikube/build/build.949289295 --cgroup-manager=cgroupfs: (2.246591588s)
I0626 18:35:58.483315  374869 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.949289295
I0626 18:35:58.491514  374869 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.949289295.tar
I0626 18:35:58.499116  374869 build_images.go:207] Built localhost/my-image:functional-900227 from /tmp/build.949289295.tar
I0626 18:35:58.499151  374869 build_images.go:123] succeeded building to: functional-900227
I0626 18:35:58.499156  374869 build_images.go:124] failed building to: 
W0626 18:35:58.502424  374869 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 43da66fa-c96e-40e3-95a1-d6feeb82576e
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.04s)
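
The build test first checks for buildkitd (absent with the crio runtime, hence the expected exit 1 from pgrep), then tars up the testdata/build context, copies it into /var/lib/minikube/build on the node and runs sudo podman build there. Judging from the STEP lines, the context amounts to a three-instruction Dockerfile plus a content.txt file; a sketch of an equivalent build with an illustrative context directory (the placeholder content.txt below is not the repository's actual file):

mkdir -p /tmp/build-demo
echo "placeholder content" > /tmp/build-demo/content.txt
cat > /tmp/build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF

out/minikube-linux-amd64 -p functional-900227 image build -t localhost/my-image:functional-900227 /tmp/build-demo
out/minikube-linux-amd64 -p functional-900227 image ls | grep my-image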

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.931510046s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-900227
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image load --daemon gcr.io/google-containers/addon-resizer:functional-900227 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 image load --daemon gcr.io/google-containers/addon-resizer:functional-900227 --alsologtostderr: (4.470937022s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image load --daemon gcr.io/google-containers/addon-resizer:functional-900227 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 image load --daemon gcr.io/google-containers/addon-resizer:functional-900227 --alsologtostderr: (3.653366764s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.88s)
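
ImageLoadDaemon and ImageReloadDaemon both push the host Docker daemon's addon-resizer image into the cluster's crio image store with image load --daemon, then confirm it shows up in image ls; the second run verifies that loading an image that is already present still succeeds. A sketch of the round trip, assuming the tag created in the Setup test above still exists on the host:

docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-900227
out/minikube-linux-amd64 -p functional-900227 image load --daemon gcr.io/google-containers/addon-resizer:functional-900227
out/minikube-linux-amd64 -p functional-900227 image ls | grep addon-resizer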

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdspecific-port1009296047/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.331224ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdspecific-port1009296047/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 ssh "sudo umount -f /mount-9p": exit status 1 (293.821537ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-900227 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdspecific-port1009296047/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)
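
This is the same 9p mount as above, pinned to host port 46464 with --port. The first findmnt probe races the mount daemon and fails once before succeeding on the retry, and the final sudo umount -f exits with status 32 ("not mounted") because the mount daemon has already been stopped; the test still passes, so the failed forced unmount is treated as acceptable cleanup. A short sketch (host path illustrative):

out/minikube-linux-amd64 mount -p functional-900227 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 --port 46464 &
out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T /mount-9p | grep 9p"
kill %1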

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3863312010/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3863312010/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3863312010/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T" /mount1: exit status 1 (484.921629ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-900227 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3863312010/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3863312010/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-900227 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3863312010/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)
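
VerifyCleanup starts three mount daemons against /mount1, /mount2 and /mount3 and then uses the kill switch, minikube mount --kill=true, to terminate every mount process belonging to the profile in one go; the "unable to find parent, assuming dead" helper lines confirm the processes were already gone when the test tried to stop them individually. A sketch (host path illustrative):

out/minikube-linux-amd64 mount -p functional-900227 /tmp/mount-demo:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-900227 /tmp/mount-demo:/mount2 --alsologtostderr -v=1 &
# kill every mount process that belongs to the functional-900227 profile
out/minikube-linux-amd64 mount -p functional-900227 --kill=true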

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image save gcr.io/google-containers/addon-resizer:functional-900227 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 image save gcr.io/google-containers/addon-resizer:functional-900227 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.661908923s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image rm gcr.io/google-containers/addon-resizer:functional-900227 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.033454334s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-900227
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-900227 image save --daemon gcr.io/google-containers/addon-resizer:functional-900227 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-900227 image save --daemon gcr.io/google-containers/addon-resizer:functional-900227 --alsologtostderr: (2.144728158s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-900227
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.18s)
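
The last four image tests form a round trip: save the in-cluster image to a tarball on the host, remove it from the cluster, load it back from the tarball, and finally export it into the host Docker daemon with image save --daemon, verified by docker image inspect. A condensed sketch (tarball path illustrative):

out/minikube-linux-amd64 -p functional-900227 image save gcr.io/google-containers/addon-resizer:functional-900227 /tmp/addon-resizer-save.tar
out/minikube-linux-amd64 -p functional-900227 image rm gcr.io/google-containers/addon-resizer:functional-900227
out/minikube-linux-amd64 -p functional-900227 image load /tmp/addon-resizer-save.tar
docker rmi gcr.io/google-containers/addon-resizer:functional-900227
out/minikube-linux-amd64 -p functional-900227 image save --daemon gcr.io/google-containers/addon-resizer:functional-900227
docker image inspect gcr.io/google-containers/addon-resizer:functional-900227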

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-900227
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-900227
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-900227
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (96.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-022189 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0626 18:36:15.218028  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-022189 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m36.636956429s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (96.64s)
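
This suite provisions a second profile pinned to Kubernetes v1.18.20 so that the ingress addon can be exercised against the older (pre-networking/v1) Ingress API. The cert_rotation error about addons-052687/client.crt comes from a background certificate watcher still pointing at the earlier, already-deleted addons profile and did not affect this run (the start succeeded). A sketch of an equivalent start, profile name illustrative:

out/minikube-linux-amd64 start -p legacy-demo --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker --container-runtime=crio
out/minikube-linux-amd64 -p legacy-demo addons enable ingress
out/minikube-linux-amd64 -p legacy-demo addons enable ingress-dns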

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.13s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-022189 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-022189 addons enable ingress --alsologtostderr -v=5: (14.130163439s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.13s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.34s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-022189 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.34s)

                                                
                                    
x
+
TestJSONOutput/start/Command (68.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-107090 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0626 18:41:32.304093  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-107090 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m8.969952001s)
--- PASS: TestJSONOutput/start/Command (68.97s)
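
With --output=json, minikube start emits one JSON event per line instead of the human-readable progress text; the Audit subtest below checks that the invocation was recorded in minikube's audit log, and DistinctCurrentSteps/IncreasingCurrentSteps check that the step events carry distinct, strictly increasing step counters. A sketch that only validates each emitted line parses as JSON, since the exact event schema depends on the minikube version (jq assumed installed, profile name illustrative):

out/minikube-linux-amd64 start -p json-demo --output=json --user=testUser --memory=2200 --driver=docker --container-runtime=crio \
  | while IFS= read -r line; do
      echo "$line" | jq . > /dev/null || echo "not valid JSON: $line"
    done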

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-107090 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-107090 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.67s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-107090 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-107090 --output=json --user=testUser: (5.666276845s)
--- PASS: TestJSONOutput/stop/Command (5.67s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-505682 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-505682 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.396038ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c9400fae-8c99-49cc-bc14-4ebd59aa97ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-505682] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"74af862a-6ac8-4df7-80d3-fa24f3bb791a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16761"}}
	{"specversion":"1.0","id":"5e1a24e6-5529-48bb-8eca-058dc5308775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1b7e3d57-328e-4d57-87da-99b84a2d54a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig"}}
	{"specversion":"1.0","id":"2cd5d9d7-5537-4c68-b3cf-359faaba691e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube"}}
	{"specversion":"1.0","id":"85f2ef8e-a9fa-4b35-9d22-9c0dbbfc0994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2000de2c-b09e-4c54-8a3d-b34306949c52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6188ff46-8243-48de-98b6-eb44ffec2991","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-505682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-505682
--- PASS: TestErrorJSONOutput (0.19s)
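Note: the stdout captured above is a stream of CloudEvents-style JSON records (specversion, id, source, type, data), one per line, and the unsupported driver surfaces as an io.k8s.sigs.minikube.error event carrying name, message and exitcode. As a minimal sketch only, not part of the test suite, such a stream can be decoded line by line; the field names below are copied from the output above, while the program structure and the filtering on error events are illustrative assumptions.
-- example --
// Decode minikube's --output=json event stream (as shown in the stdout above)
// and print only the error events. Field names come from the captured output;
// everything else is an illustrative assumption.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from a `minikube start ... --output=json` run
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
		}
	}
}
-- /example --
Piping the same start command as above into this sketch would print only the DRV_UNSUPPORTED_OS error record with its exit code 56.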

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.13s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-521144 --network=
E0626 18:42:54.224552  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 18:42:56.658966  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:42:56.664244  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:42:56.674498  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:42:56.694787  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:42:56.735069  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:42:56.815392  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:42:56.975796  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:42:57.295955  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:42:57.936118  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-521144 --network=: (39.081578128s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-521144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-521144
E0626 18:42:59.216770  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-521144: (2.035506888s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.13s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (27.09s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-742690 --network=bridge
E0626 18:43:01.777580  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:43:06.898081  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:43:17.139072  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-742690 --network=bridge: (25.234462087s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-742690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-742690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-742690: (1.84313798s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.09s)

                                                
                                    
TestKicExistingNetwork (27.59s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-622906 --network=existing-network
E0626 18:43:31.373702  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:43:37.619275  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-622906 --network=existing-network: (25.626460515s)
helpers_test.go:175: Cleaning up "existing-network-622906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-622906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-622906: (1.837499224s)
--- PASS: TestKicExistingNetwork (27.59s)

                                                
                                    
TestKicCustomSubnet (27.48s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-119613 --subnet=192.168.60.0/24
E0626 18:44:18.581003  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-119613 --subnet=192.168.60.0/24: (25.47305312s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-119613 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-119613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-119613
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-119613: (1.991396417s)
--- PASS: TestKicCustomSubnet (27.48s)
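Note: the subnet assertion above is driven by `docker network inspect custom-subnet-119613 --format "{{(index .IPAM.Config 0).Subnet}}"`. A standalone sketch of the same check, assuming the network name and the 192.168.60.0/24 subnet from this run, could look like the following.
-- example --
// Standalone version of the subnet check logged above: ask Docker for the
// network's first IPAM subnet and compare it to the value that was requested
// with --subnet. The network name and subnet are taken from this run.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func networkSubnet(name string) (string, error) {
	out, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	want := "192.168.60.0/24" // value passed to --subnet in the run above
	got, err := networkSubnet("custom-subnet-119613")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Printf("want %s, got %s, match=%v\n", want, got, got == want)
}
-- /example --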

                                                
                                    
TestKicStaticIP (26.58s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-224523 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-224523 --static-ip=192.168.200.200: (24.424077015s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-224523 ip
helpers_test.go:175: Cleaning up "static-ip-224523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-224523
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-224523: (2.046947293s)
--- PASS: TestKicStaticIP (26.58s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (52.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-788457 --driver=docker  --container-runtime=crio
E0626 18:45:10.380848  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-788457 --driver=docker  --container-runtime=crio: (23.340086218s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-791604 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-791604 --driver=docker  --container-runtime=crio: (24.203181533s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-788457
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-791604
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-791604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-791604
E0626 18:45:38.065184  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-791604: (1.840633012s)
helpers_test.go:175: Cleaning up "first-788457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-788457
E0626 18:45:40.501923  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-788457: (2.183668339s)
--- PASS: TestMinikubeProfile (52.52s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.51s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-622029 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-622029 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.511446247s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.51s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-622029 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-638061 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-638061 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.044438648s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.04s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638061 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-622029 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-622029 --alsologtostderr -v=5: (1.628031662s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638061 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-638061
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-638061: (1.177475472s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.87s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-638061
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-638061: (5.870363266s)
--- PASS: TestMountStart/serial/RestartStopped (6.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638061 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-306845 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-306845 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m46.577576052s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.02s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-306845 -- rollout status deployment/busybox: (3.421655439s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-c5c5w -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-cxsjd -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-c5c5w -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-cxsjd -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-c5c5w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-306845 -- exec busybox-67b7f59bb-cxsjd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.98s)

                                                
                                    
TestMultiNode/serial/AddNode (18.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-306845 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-306845 -v 3 --alsologtostderr: (18.108546463s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.69s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp testdata/cp-test.txt multinode-306845:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp multinode-306845:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3670180132/001/cp-test_multinode-306845.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp multinode-306845:/home/docker/cp-test.txt multinode-306845-m02:/home/docker/cp-test_multinode-306845_multinode-306845-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m02 "sudo cat /home/docker/cp-test_multinode-306845_multinode-306845-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp multinode-306845:/home/docker/cp-test.txt multinode-306845-m03:/home/docker/cp-test_multinode-306845_multinode-306845-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m03 "sudo cat /home/docker/cp-test_multinode-306845_multinode-306845-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp testdata/cp-test.txt multinode-306845-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp multinode-306845-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3670180132/001/cp-test_multinode-306845-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp multinode-306845-m02:/home/docker/cp-test.txt multinode-306845:/home/docker/cp-test_multinode-306845-m02_multinode-306845.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845 "sudo cat /home/docker/cp-test_multinode-306845-m02_multinode-306845.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp multinode-306845-m02:/home/docker/cp-test.txt multinode-306845-m03:/home/docker/cp-test_multinode-306845-m02_multinode-306845-m03.txt
E0626 18:48:24.342111  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m03 "sudo cat /home/docker/cp-test_multinode-306845-m02_multinode-306845-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp testdata/cp-test.txt multinode-306845-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp multinode-306845-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3670180132/001/cp-test_multinode-306845-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp multinode-306845-m03:/home/docker/cp-test.txt multinode-306845:/home/docker/cp-test_multinode-306845-m03_multinode-306845.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845 "sudo cat /home/docker/cp-test_multinode-306845-m03_multinode-306845.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 cp multinode-306845-m03:/home/docker/cp-test.txt multinode-306845-m02:/home/docker/cp-test_multinode-306845-m03_multinode-306845-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 ssh -n multinode-306845-m02 "sudo cat /home/docker/cp-test_multinode-306845-m03_multinode-306845-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.60s)
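Note: every step above follows the same copy-then-verify pattern: push a file onto a node with `minikube cp`, then read it back with `minikube ssh -n <node> "sudo cat ..."` and compare. The sketch below reproduces that pattern outside the test harness; the binary path, profile, node name, and file paths mirror this run and are otherwise just examples.
-- example --
// Copy-then-verify pattern from the CopyFile steps above: `minikube cp` a
// local file to a node, read it back over `minikube ssh`, and compare the
// contents. Paths and names mirror this run.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		bin     = "out/minikube-linux-amd64"
		profile = "multinode-306845"
		node    = "multinode-306845-m02"
		local   = "testdata/cp-test.txt"
		remote  = "/home/docker/cp-test.txt"
	)
	if out, err := exec.Command(bin, "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
		fmt.Println("cp failed:", err, string(out))
		return
	}
	want, err := os.ReadFile(local)
	if err != nil {
		fmt.Println("read local file:", err)
		return
	}
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		fmt.Println("ssh cat failed:", err)
		return
	}
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)))
}
-- /example --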

                                                
                                    
TestMultiNode/serial/StopNode (2.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-306845 node stop m03: (1.187991253s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-306845 status: exit status 7 (448.354905ms)

                                                
                                                
-- stdout --
	multinode-306845
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-306845-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-306845-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-306845 status --alsologtostderr: exit status 7 (443.767642ms)

                                                
                                                
-- stdout --
	multinode-306845
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-306845-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-306845-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 18:48:29.245996  434422 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:48:29.246116  434422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:48:29.246126  434422 out.go:309] Setting ErrFile to fd 2...
	I0626 18:48:29.246131  434422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:48:29.246258  434422 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 18:48:29.246474  434422 out.go:303] Setting JSON to false
	I0626 18:48:29.246505  434422 mustload.go:65] Loading cluster: multinode-306845
	I0626 18:48:29.246604  434422 notify.go:220] Checking for updates...
	I0626 18:48:29.247291  434422 config.go:182] Loaded profile config "multinode-306845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:48:29.247378  434422 status.go:255] checking status of multinode-306845 ...
	I0626 18:48:29.248594  434422 cli_runner.go:164] Run: docker container inspect multinode-306845 --format={{.State.Status}}
	I0626 18:48:29.264740  434422 status.go:330] multinode-306845 host status = "Running" (err=<nil>)
	I0626 18:48:29.264764  434422 host.go:66] Checking if "multinode-306845" exists ...
	I0626 18:48:29.265136  434422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-306845
	I0626 18:48:29.280379  434422 host.go:66] Checking if "multinode-306845" exists ...
	I0626 18:48:29.280620  434422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 18:48:29.280680  434422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845
	I0626 18:48:29.296157  434422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33159 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845/id_rsa Username:docker}
	I0626 18:48:29.386072  434422 ssh_runner.go:195] Run: systemctl --version
	I0626 18:48:29.389955  434422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 18:48:29.400158  434422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:48:29.446130  434422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-06-26 18:48:29.437676682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:48:29.446682  434422 kubeconfig.go:92] found "multinode-306845" server: "https://192.168.58.2:8443"
	I0626 18:48:29.446704  434422 api_server.go:166] Checking apiserver status ...
	I0626 18:48:29.446766  434422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0626 18:48:29.457062  434422 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1409/cgroup
	I0626 18:48:29.465361  434422 api_server.go:182] apiserver freezer: "11:freezer:/docker/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/crio/crio-dd5fd5120eebed90409e095408287c8bbaed2ca611a89975e7526eb7a7daa1e2"
	I0626 18:48:29.465422  434422 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/15943a8b3a25970a98222bf057392b9053de92fe36c81e94dad2e180423ed8c2/crio/crio-dd5fd5120eebed90409e095408287c8bbaed2ca611a89975e7526eb7a7daa1e2/freezer.state
	I0626 18:48:29.472856  434422 api_server.go:204] freezer state: "THAWED"
	I0626 18:48:29.472895  434422 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0626 18:48:29.478532  434422 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0626 18:48:29.478554  434422 status.go:421] multinode-306845 apiserver status = Running (err=<nil>)
	I0626 18:48:29.478564  434422 status.go:257] multinode-306845 status: &{Name:multinode-306845 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0626 18:48:29.478582  434422 status.go:255] checking status of multinode-306845-m02 ...
	I0626 18:48:29.478821  434422 cli_runner.go:164] Run: docker container inspect multinode-306845-m02 --format={{.State.Status}}
	I0626 18:48:29.494852  434422 status.go:330] multinode-306845-m02 host status = "Running" (err=<nil>)
	I0626 18:48:29.494876  434422 host.go:66] Checking if "multinode-306845-m02" exists ...
	I0626 18:48:29.495148  434422 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-306845-m02
	I0626 18:48:29.511612  434422 host.go:66] Checking if "multinode-306845-m02" exists ...
	I0626 18:48:29.511851  434422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0626 18:48:29.511885  434422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-306845-m02
	I0626 18:48:29.527526  434422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/16761-330054/.minikube/machines/multinode-306845-m02/id_rsa Username:docker}
	I0626 18:48:29.621801  434422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0626 18:48:29.631712  434422 status.go:257] multinode-306845-m02 status: &{Name:multinode-306845-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0626 18:48:29.631750  434422 status.go:255] checking status of multinode-306845-m03 ...
	I0626 18:48:29.632004  434422 cli_runner.go:164] Run: docker container inspect multinode-306845-m03 --format={{.State.Status}}
	I0626 18:48:29.648326  434422 status.go:330] multinode-306845-m03 host status = "Stopped" (err=<nil>)
	I0626 18:48:29.648356  434422 status.go:343] host is not running, skipping remaining checks
	I0626 18:48:29.648365  434422 status.go:257] multinode-306845-m03 status: &{Name:multinode-306845-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
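Note: the stderr above shows the shape of the status check: inspect the container state, confirm kubelet via systemctl, locate the apiserver's freezer cgroup, then probe https://192.168.58.2:8443/healthz and treat HTTP 200 with body "ok" as healthy. Below is a stripped-down sketch of just that last probe; skipping TLS verification is a simplification made for the example, not a claim about how minikube's own status code handles certificates.
-- example --
// Probe the apiserver /healthz endpoint the way the status log above
// describes: expect HTTP 200 and the body "ok". TLS verification is skipped
// here only to keep the sketch short (an assumption, not minikube's behavior).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.58.2:8443/healthz")
	fmt.Println("apiserver healthy:", ok, "err:", err)
}
-- /example --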

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 node start m03 --alsologtostderr
E0626 18:48:31.373715  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-306845 node start m03 --alsologtostderr: (10.19414714s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.86s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (110.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-306845
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-306845
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-306845: (24.873796277s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-306845 --wait=true -v=8 --alsologtostderr
E0626 18:49:54.420029  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 18:50:10.381171  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-306845 --wait=true -v=8 --alsologtostderr: (1m25.583860188s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-306845
--- PASS: TestMultiNode/serial/RestartKeepsNodes (110.55s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-306845 node delete m03: (4.028173916s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.59s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-306845 stop: (23.663767821s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-306845 status: exit status 7 (81.007693ms)

                                                
                                                
-- stdout --
	multinode-306845
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-306845-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-306845 status --alsologtostderr: exit status 7 (75.01185ms)

                                                
                                                
-- stdout --
	multinode-306845
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-306845-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 18:50:59.430928  444693 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:50:59.431096  444693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:50:59.431105  444693 out.go:309] Setting ErrFile to fd 2...
	I0626 18:50:59.431110  444693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:50:59.431219  444693 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 18:50:59.431373  444693 out.go:303] Setting JSON to false
	I0626 18:50:59.431400  444693 mustload.go:65] Loading cluster: multinode-306845
	I0626 18:50:59.431435  444693 notify.go:220] Checking for updates...
	I0626 18:50:59.431798  444693 config.go:182] Loaded profile config "multinode-306845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:50:59.431815  444693 status.go:255] checking status of multinode-306845 ...
	I0626 18:50:59.432289  444693 cli_runner.go:164] Run: docker container inspect multinode-306845 --format={{.State.Status}}
	I0626 18:50:59.448967  444693 status.go:330] multinode-306845 host status = "Stopped" (err=<nil>)
	I0626 18:50:59.448985  444693 status.go:343] host is not running, skipping remaining checks
	I0626 18:50:59.448991  444693 status.go:257] multinode-306845 status: &{Name:multinode-306845 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0626 18:50:59.449028  444693 status.go:255] checking status of multinode-306845-m02 ...
	I0626 18:50:59.449268  444693 cli_runner.go:164] Run: docker container inspect multinode-306845-m02 --format={{.State.Status}}
	I0626 18:50:59.464699  444693 status.go:330] multinode-306845-m02 host status = "Stopped" (err=<nil>)
	I0626 18:50:59.464721  444693 status.go:343] host is not running, skipping remaining checks
	I0626 18:50:59.464729  444693 status.go:257] multinode-306845-m02 status: &{Name:multinode-306845-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)
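Note: with both hosts stopped, `minikube status` still prints the per-node fields but exits non-zero (exit status 7 in this run). The sketch below reruns the same command and surfaces the exit code alongside the output, without hard-coding what any particular code means; binary path and profile name are taken from this run.
-- example --
// Rerun the status command from the test above and report its exit code.
// In this run a fully stopped cluster produced exit status 7 together with
// the "Stopped" fields; the sketch only surfaces the code, it does not
// assign it a fixed meaning.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-306845", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run:", err)
	}
}
-- /example --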

                                                
                                    
TestMultiNode/serial/RestartMultiNode (79.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-306845 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-306845 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.135579302s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-306845 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.70s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-306845
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-306845-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-306845-m02 --driver=docker  --container-runtime=crio: exit status 14 (62.237304ms)

                                                
                                                
-- stdout --
	* [multinode-306845-m02] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-306845-m02' is duplicated with machine name 'multinode-306845-m02' in profile 'multinode-306845'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-306845-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-306845-m03 --driver=docker  --container-runtime=crio: (20.81244408s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-306845
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-306845: exit status 80 (256.136812ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-306845
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-306845-m03 already exists in multinode-306845-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-306845-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-306845-m03: (1.83405682s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.01s)

                                                
                                    
TestPreload (154.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-159099 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0626 18:52:56.659674  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 18:53:31.373361  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-159099 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m11.49967284s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-159099 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-159099 image pull gcr.io/k8s-minikube/busybox: (2.483962195s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-159099
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-159099: (5.674258017s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-159099 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0626 18:55:10.381055  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-159099 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m12.428458622s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-159099 image list
helpers_test.go:175: Cleaning up "test-preload-159099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-159099
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-159099: (2.290537072s)
--- PASS: TestPreload (154.58s)

                                                
                                    
x
+
TestScheduledStopUnix (96.68s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-539748 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-539748 --memory=2048 --driver=docker  --container-runtime=crio: (20.407788652s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-539748 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-539748 -n scheduled-stop-539748
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-539748 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-539748 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-539748 -n scheduled-stop-539748
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-539748
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-539748 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0626 18:56:33.425558  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-539748
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-539748: exit status 7 (60.426893ms)

                                                
                                                
-- stdout --
	scheduled-stop-539748
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-539748 -n scheduled-stop-539748
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-539748 -n scheduled-stop-539748: exit status 7 (60.311042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-539748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-539748
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-539748: (5.059367933s)
--- PASS: TestScheduledStopUnix (96.68s)
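Note: the scheduled-stop flow exercised above reduces to the following CLI sequence (a minimal sketch using the placeholder profile name "demo"; the 5m/15s delays are simply the values this test uses):

$ minikube start -p demo --memory=2048 --driver=docker --container-runtime=crio
$ minikube stop -p demo --schedule 5m          # arm a delayed stop
$ minikube stop -p demo --cancel-scheduled     # cancel it before it fires
$ minikube stop -p demo --schedule 15s         # re-arm with a short delay
$ minikube status -p demo                      # exit status 7 once the host reports Stopped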

                                                
                                    
x
+
TestInsufficientStorage (12.83s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-390914 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-390914 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.520275111s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a4003dd2-b38b-4ca6-acbc-b06afc749567","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-390914] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d7318e4-3781-48eb-863c-38795a66fcb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16761"}}
	{"specversion":"1.0","id":"77812f39-d6e0-4016-9db4-30c69cfe7018","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"53691bc6-803d-4b28-8516-5fe58aee60e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig"}}
	{"specversion":"1.0","id":"63cc570c-dea9-4496-ba11-335fa8fc9e3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube"}}
	{"specversion":"1.0","id":"288665ff-29c0-4d26-b175-c1ce5d21fae2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"09d7a80f-a405-4b85-82bb-de0e3b02352a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e005b632-cdcc-4bd3-8a24-5384524c8590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"19f91460-b0e4-4725-8eca-d8aa8a631874","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b34c7af1-a29f-418b-81e5-f6ada55067ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"53916680-7e14-49d5-9b24-940113a57175","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f74c6282-5ce0-48d9-b90e-901bb37cf094","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-390914 in cluster insufficient-storage-390914","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"65ae06c8-deda-41ce-b4d6-9b09f36a2060","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"511e5e1e-b771-465d-aeb5-18a2a586965b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f851507-29f4-49b0-8579-d820f3f787ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-390914 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-390914 --output=json --layout=cluster: exit status 7 (251.121977ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-390914","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-390914","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 18:57:09.612244  466058 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-390914" does not appear in /home/jenkins/minikube-integration/16761-330054/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-390914 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-390914 --output=json --layout=cluster: exit status 7 (258.351471ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-390914","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-390914","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0626 18:57:09.871420  466145 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-390914" does not appear in /home/jenkins/minikube-integration/16761-330054/kubeconfig
	E0626 18:57:09.881066  466145 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/insufficient-storage-390914/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-390914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-390914
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-390914: (1.801517684s)
--- PASS: TestInsufficientStorage (12.83s)
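Note: the JSON above shows the storage limits injected for this run (MINIKUBE_TEST_STORAGE_CAPACITY=100, MINIKUBE_TEST_AVAILABLE_STORAGE=19), which is why start exits with code 26 (RSRC_DOCKER_STORAGE) and status reports code 507. A rough reproduction, assuming those test-only environment variables behave the same outside this harness and using the placeholder profile name "demo":

$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
    minikube start -p demo --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio   # exit status 26
$ minikube status -p demo --output=json --layout=cluster   # exit status 7, StatusCode 507 (InsufficientStorage)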

                                                
                                    
x
+
TestKubernetesUpgrade (350.99s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-373057 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-373057 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.853642652s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-373057
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-373057: (1.222192251s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-373057 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-373057 status --format={{.Host}}: exit status 7 (76.832368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-373057 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-373057 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.95540527s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-373057 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-373057 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-373057 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (63.944162ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-373057] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-373057
	    minikube start -p kubernetes-upgrade-373057 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3730572 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-373057 --kubernetes-version=v1.27.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-373057 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-373057 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.482016304s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-373057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-373057
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-373057: (2.277684852s)
--- PASS: TestKubernetesUpgrade (350.99s)

                                                
                                    
x
+
TestMissingContainerUpgrade (174.92s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.0.665074732.exe start -p missing-upgrade-787694 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.9.0.665074732.exe start -p missing-upgrade-787694 --memory=2200 --driver=docker  --container-runtime=crio: (1m39.510265331s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-787694
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-787694: (10.501825665s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-787694
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-787694 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-787694 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.655091343s)
helpers_test.go:175: Cleaning up "missing-upgrade-787694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-787694
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-787694: (1.985407435s)
--- PASS: TestMissingContainerUpgrade (174.92s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-966554 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-966554 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (148.173088ms)

                                                
                                                
-- stdout --
	* [false-966554] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0626 18:57:14.866586  467774 out.go:296] Setting OutFile to fd 1 ...
	I0626 18:57:14.866715  467774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:57:14.866724  467774 out.go:309] Setting ErrFile to fd 2...
	I0626 18:57:14.866729  467774 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0626 18:57:14.866840  467774 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16761-330054/.minikube/bin
	I0626 18:57:14.867425  467774 out.go:303] Setting JSON to false
	I0626 18:57:14.869031  467774 start.go:127] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":5985,"bootTime":1687799850,"procs":492,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0626 18:57:14.869185  467774 start.go:137] virtualization: kvm guest
	I0626 18:57:14.872387  467774 out.go:177] * [false-966554] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0626 18:57:14.874249  467774 out.go:177]   - MINIKUBE_LOCATION=16761
	I0626 18:57:14.874298  467774 notify.go:220] Checking for updates...
	I0626 18:57:14.875750  467774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0626 18:57:14.877144  467774 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	I0626 18:57:14.878421  467774 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	I0626 18:57:14.879802  467774 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0626 18:57:14.881044  467774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0626 18:57:14.882821  467774 config.go:182] Loaded profile config "missing-upgrade-787694": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0626 18:57:14.882989  467774 config.go:182] Loaded profile config "offline-crio-715574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0626 18:57:14.883086  467774 config.go:182] Loaded profile config "stopped-upgrade-735296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0626 18:57:14.883205  467774 driver.go:373] Setting default libvirt URI to qemu:///system
	I0626 18:57:14.907279  467774 docker.go:121] docker version: linux-24.0.2:Docker Engine - Community
	I0626 18:57:14.907373  467774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0626 18:57:14.958186  467774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2023-06-26 18:57:14.948804248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1036-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0626 18:57:14.958288  467774 docker.go:294] overlay module found
	I0626 18:57:14.960489  467774 out.go:177] * Using the docker driver based on user configuration
	I0626 18:57:14.962081  467774 start.go:297] selected driver: docker
	I0626 18:57:14.962103  467774 start.go:954] validating driver "docker" against <nil>
	I0626 18:57:14.962121  467774 start.go:965] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0626 18:57:14.964805  467774 out.go:177] 
	W0626 18:57:14.966342  467774 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0626 18:57:14.967683  467774 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-966554 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-966554" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-966554

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-966554"

                                                
                                                
----------------------- debugLogs end: false-966554 [took: 5.723525277s] --------------------------------
helpers_test.go:175: Cleaning up "false-966554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-966554
--- PASS: TestNetworkPlugins/group/false (6.02s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-735296
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.50s)

                                                
                                    
x
+
TestPause/serial/Start (44.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-943129 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0626 18:59:19.702365  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-943129 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.197890527s)
--- PASS: TestPause/serial/Start (44.20s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (42.17s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-943129 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-943129 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.136092865s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-843364 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-843364 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (62.753007ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-843364] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16761
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16761-330054/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16761-330054/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (25.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-843364 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-843364 --driver=docker  --container-runtime=crio: (25.461651448s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-843364 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.78s)

                                                
                                    
x
+
TestPause/serial/Pause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-943129 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-943129 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-943129 --output=json --layout=cluster: exit status 2 (331.005491ms)

                                                
                                                
-- stdout --
	{"Name":"pause-943129","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-943129","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-943129 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-943129 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.72s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-943129 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-943129 --alsologtostderr -v=5: (2.721193037s)
--- PASS: TestPause/serial/DeletePaused (2.72s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.67s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-943129
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-943129: exit status 1 (15.711284ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-943129: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.67s)
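Note: taken together, the TestPause steps above amount to this lifecycle (sketch only; "demo" is a placeholder profile name and the verbosity flags used by the harness are dropped):

$ minikube start -p demo --memory=2048 --install-addons=false --wait=all --driver=docker --container-runtime=crio
$ minikube pause -p demo
$ minikube status -p demo --output=json --layout=cluster   # exit status 2, StatusCode 418 (Paused)
$ minikube unpause -p demo
$ minikube pause -p demo
$ minikube delete -p demo
$ docker volume inspect demo                               # fails once the volume has been removed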

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-843364 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-843364 --no-kubernetes --driver=docker  --container-runtime=crio: (6.547108475s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-843364 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-843364 status -o json: exit status 2 (288.315056ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-843364","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-843364
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-843364: (1.903857494s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.74s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (70.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m10.461274986s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.46s)

                                                
                                    
TestNoKubernetes/serial/Start (4.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-843364 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-843364 --no-kubernetes --driver=docker  --container-runtime=crio: (4.378251984s)
--- PASS: TestNoKubernetes/serial/Start (4.38s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-843364 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-843364 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.972527ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
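For context on the pass condition here: systemctl is-active exits 0 only when the queried unit is active, so the non-zero exit (status 3 over ssh, surfaced as exit status 1 by the minikube wrapper) is exactly what a --no-kubernetes profile should produce for kubelet. A minimal way to repeat the probe, assuming the NoKubernetes-843364 profile is still running (the echo line is an illustrative addition):

    # expect a non-zero exit code while Kubernetes is disabled in this profile (assumes the profile is still up)
    out/minikube-linux-amd64 ssh -p NoKubernetes-843364 "sudo systemctl is-active --quiet service kubelet"
    echo "kubelet active-check exit code: $?"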

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-843364
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-843364: (1.190893033s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-843364 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-843364 --driver=docker  --container-runtime=crio: (6.442139092s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.44s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-843364 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-843364 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.96959ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.649798614s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.65s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-966554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-966554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hrhvq" [48d3ec63-e4ac-418f-9c38-1317defdce6a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hrhvq" [48d3ec63-e4ac-418f-9c38-1317defdce6a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006613175s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-966554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
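The three checks above (DNS, Localhost, HairPin) are the same connectivity trio every network-plugin suite in this run exercises: an in-cluster DNS lookup, a pod-local port probe, and a hairpin probe against the pod's own Service name. Repeating them by hand against the auto-966554 context looks like this, assuming the netcat deployment from testdata/netcat-deployment.yaml is still running:

    # in-cluster DNS resolution from inside the netcat pod
    kubectl --context auto-966554 exec deployment/netcat -- nslookup kubernetes.default
    # pod-local port reachability
    kubectl --context auto-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod reaches itself through its own Service
    kubectl --context auto-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"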

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-pttb9" [bc30673d-e290-4c6a-910b-ab2189a4d4f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.017485347s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (69.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m9.992996587s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.99s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-966554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-966554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-586fc" [093f754b-5ba2-45f6-bdc3-3f84a861a37e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-586fc" [093f754b-5ba2-45f6-bdc3-3f84a861a37e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007610378s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (57.268254579s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-966554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0626 19:02:56.658851  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.144635479s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-966554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-966554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-xrkzg" [507be086-1079-4909-bb86-c3bbb669dd33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-xrkzg" [507be086-1079-4909-bb86-c3bbb669dd33] Running
E0626 19:03:31.373645  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006324342s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2vh8c" [c58b99a8-2357-42b3-acb3-e47cee4f973c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019910009s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-966554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-966554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hcfhf" [95f50e2b-4277-4441-9752-e9e37284c2c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-hcfhf" [95f50e2b-4277-4441-9752-e9e37284c2c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.007789224s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-966554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-966554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-658rq" [00d0533e-0fe3-4645-95fe-63281f40deaa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.017784982s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (78.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m18.884983254s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.89s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-966554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-966554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fs55q" [9a858cb0-c1e3-4935-8f36-8e033348b1a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-fs55q" [9a858cb0-c1e3-4935-8f36-8e033348b1a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.008409751s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (50.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-966554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (50.577529982s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.58s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-966554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (116.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-071001 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-071001 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m56.572309096s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (116.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-966554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-966554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-2hcts" [93491b5b-47aa-4214-94d2-0d1f5d79312a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-2hcts" [93491b5b-47aa-4214-94d2-0d1f5d79312a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00740635s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-966554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-966554 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-966554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-xttgn" [701b2650-21ca-47c4-b05c-940a8f340528] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-xttgn" [701b2650-21ca-47c4-b05c-940a8f340528] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008568877s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-966554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-966554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (64.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-686349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-686349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m4.539885168s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (71.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-260593 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-260593 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m11.629116474s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-460457 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-460457 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m9.939355789s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-071001 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [82007aff-d67d-407b-a4b1-e8abfdf9f8c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [82007aff-d67d-407b-a4b1-e8abfdf9f8c5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.013419972s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-071001 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)
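The DeployApp step is the same across the start/stop groups: create the busybox pod from testdata/busybox.yaml, wait for it to become ready, then read the open-file limit inside the container. A hand-run sketch against this profile (the kubectl wait line is an illustrative stand-in for the harness's own readiness polling):

    kubectl --context old-k8s-version-071001 create -f testdata/busybox.yaml
    # illustrative: the test harness polls for readiness itself rather than using kubectl wait
    kubectl --context old-k8s-version-071001 wait --for=condition=ready pod/busybox --timeout=8m
    # prints the container's open-file-descriptor limit
    kubectl --context old-k8s-version-071001 exec busybox -- /bin/sh -c "ulimit -n"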

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-686349 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9262f916-36a9-4a37-8ddc-6dae7da08f2a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0626 19:06:34.420603  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
helpers_test.go:344: "busybox" [9262f916-36a9-4a37-8ddc-6dae7da08f2a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.014281694s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-686349 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-071001 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-071001 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-071001 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-071001 --alsologtostderr -v=3: (11.989972332s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-686349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-686349 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-686349 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-686349 --alsologtostderr -v=3: (11.948198811s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071001 -n old-k8s-version-071001
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071001 -n old-k8s-version-071001: exit status 7 (60.258632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-071001 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
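Note on the exit status 7 above: minikube status reports a stopped profile both in its output ("Stopped") and through a non-zero exit code, which is why the harness logs "may be ok" and goes on to enable the dashboard addon against the stopped profile. Checking it by hand (the quoting and the echo line are illustrative additions):

    # exit code 7 was observed in this run while the profile is stopped
    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-071001 -n old-k8s-version-071001
    echo "status exit code: $?"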

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (432.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-071001 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0626 19:06:51.977776  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:06:51.983053  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:06:51.993285  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:06:52.013561  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:06:52.053837  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:06:52.134178  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:06:52.294611  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:06:52.615408  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:06:53.255559  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:06:54.536433  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-071001 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m11.93286498s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071001 -n old-k8s-version-071001
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (432.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-686349 -n no-preload-686349
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-686349 -n no-preload-686349: exit status 7 (85.125393ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-686349 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (332.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-686349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0626 19:06:57.097620  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-686349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m32.153215795s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-686349 -n no-preload-686349
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (332.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-260593 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c0feb474-8c9c-4905-bafe-fc5a721d5648] Pending
helpers_test.go:344: "busybox" [c0feb474-8c9c-4905-bafe-fc5a721d5648] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0626 19:07:02.217818  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c0feb474-8c9c-4905-bafe-fc5a721d5648] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.143135846s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-260593 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.65s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.50s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-460457 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [83a97007-2e1f-41d3-b32b-3ef86e1d689d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [83a97007-2e1f-41d3-b32b-3ef86e1d689d] Running
E0626 19:07:16.469497  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:07:16.474795  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:07:16.485117  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:07:16.505418  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:07:16.545716  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:07:16.626184  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:07:16.786695  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:07:17.107330  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:07:17.748192  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.014221887s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-460457 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-260593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-260593 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-260593 --alsologtostderr -v=3
E0626 19:07:12.457988  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-260593 --alsologtostderr -v=3: (12.06316498s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-460457 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0626 19:07:19.028993  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-460457 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-460457 --alsologtostderr -v=3
E0626 19:07:21.589907  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-460457 --alsologtostderr -v=3: (11.919470618s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-260593 -n embed-certs-260593
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-260593 -n embed-certs-260593: exit status 7 (68.626626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-260593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)
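Note: the check above is the report's standard pattern for a stopped profile: status exits with code 7 and prints "Stopped", which the harness treats as acceptable before re-enabling an addon. A minimal shell sketch of the same sequence, assuming `minikube` is an alias for out/minikube-linux-amd64 (an assumption, not part of the harness):

    # Query only the host field; a stopped profile makes `status` exit non-zero (7 above),
    # so the failure is tolerated instead of aborting the script.
    host_state="$(minikube status --format='{{.Host}}' -p embed-certs-260593 -n embed-certs-260593 || true)"
    if [ "$host_state" = "Stopped" ]; then
      # Addons can still be toggled while the profile is stopped, exactly as the test does next.
      minikube addons enable dashboard -p embed-certs-260593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi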

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (590.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-260593 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0626 19:07:26.710579  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-260593 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (9m49.738536472s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-260593 -n embed-certs-260593
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (590.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-460457 -n default-k8s-diff-port-460457
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-460457 -n default-k8s-diff-port-460457: exit status 7 (61.413943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-460457 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (585.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-460457 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0626 19:07:32.939141  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:07:36.951202  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:07:56.658805  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 19:07:57.432181  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:08:13.899993  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:08:25.556628  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:25.561911  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:25.572148  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:25.592427  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:25.632711  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:25.713031  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:25.873440  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:26.193595  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:26.834706  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:28.114883  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:30.213010  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:30.218264  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:30.228514  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:30.248799  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:30.289714  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:30.370090  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:30.530627  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:30.675928  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:30.851288  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:31.373001  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
E0626 19:08:31.492234  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:32.773319  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:35.333720  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:35.796483  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:38.392952  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:08:40.454583  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:46.037259  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:08:50.694863  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:08:54.191510  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:54.196804  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:54.207080  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:54.227397  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:54.267731  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:54.348196  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:54.508494  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:54.828852  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:55.469365  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:56.750013  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:08:59.310545  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:09:04.430697  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:09:06.517910  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:09:11.175752  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:09:14.671786  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:09:35.152780  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:09:35.821122  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:09:47.478316  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:09:52.136212  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:09:57.826113  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:09:57.831389  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:09:57.841686  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:09:57.861956  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:09:57.902223  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:09:57.982545  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:09:58.142847  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:09:58.463605  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:09:59.104320  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:10:00.313587  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:10:00.385058  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:10:02.945826  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:10:08.066658  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:10:10.381165  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 19:10:14.684520  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:14.689799  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:14.700056  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:14.720332  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:14.760608  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:14.840953  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:15.001400  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:15.321996  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:15.962961  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:16.113401  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:10:17.243576  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:18.307323  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:10:19.804588  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:24.924830  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:35.165828  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:10:38.787762  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:10:55.646559  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:11:09.398701  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:11:14.056991  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:11:19.748756  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:11:36.607624  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:11:38.033979  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:11:51.978092  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
E0626 19:12:16.469367  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
E0626 19:12:19.662148  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/auto-966554/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-460457 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (9m45.452245909s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-460457 -n default-k8s-diff-port-460457
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (585.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mmgff" [4abfce6a-eea7-4dce-8337-757e3918c09e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mmgff" [4abfce6a-eea7-4dce-8337-757e3918c09e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.014247228s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mmgff" [4abfce6a-eea7-4dce-8337-757e3918c09e] Running
E0626 19:12:41.668972  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/enable-default-cni-966554/client.crt: no such file or directory
E0626 19:12:44.154089  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006854472s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-686349 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
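For reference, the wait-then-describe step above can be approximated by hand with kubectl; the sketch below uses `kubectl wait` in place of the harness's own polling helper, so the timeout handling is an assumption rather than the test's exact logic:

    # Wait for the dashboard pod selected by k8s-app=kubernetes-dashboard to become Ready,
    # then confirm the metrics-scraper deployment exists, mirroring the two steps above.
    kubectl --context no-preload-686349 -n kubernetes-dashboard \
      wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m
    kubectl --context no-preload-686349 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper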

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-686349 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)
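The image verification above boils down to listing the runtime's images over SSH and scanning the repo tags. A rough equivalent by hand, assuming `minikube` aliases the binary under test, that `jq` is available on the host, and that `crictl images -o json` exposes the tags under .images[].repoTags (typical crictl output, but an assumption here):

    # Dump every repo tag known to CRI-O inside the node; the test then compares this list
    # against the images minikube itself is expected to ship and reports anything extra.
    minikube ssh -p no-preload-686349 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]?'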

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-686349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-686349 --alsologtostderr -v=1: (1.15187244s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-686349 -n no-preload-686349
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-686349 -n no-preload-686349: exit status 2 (399.709244ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-686349 -n no-preload-686349
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-686349 -n no-preload-686349: exit status 2 (328.134522ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-686349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-686349 -n no-preload-686349
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-686349 -n no-preload-686349
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.42s)
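The pause check follows the same tolerate-the-exit-code pattern: while paused, status exits 2 and reports the apiserver as Paused and the kubelet as Stopped; after unpause it should exit cleanly again. A hedged shell sketch of that sequence (again assuming a `minikube` alias for out/minikube-linux-amd64):

    minikube pause -p no-preload-686349 --alsologtostderr -v=1
    minikube status --format='{{.APIServer}}' -p no-preload-686349 -n no-preload-686349 || true   # "Paused", exit 2
    minikube status --format='{{.Kubelet}}'   -p no-preload-686349 -n no-preload-686349 || true   # "Stopped", exit 2
    minikube unpause -p no-preload-686349 --alsologtostderr -v=1
    minikube status --format='{{.APIServer}}' -p no-preload-686349 -n no-preload-686349           # expected "Running"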

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-231920 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0626 19:12:56.659463  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/ingress-addon-legacy-022189/client.crt: no such file or directory
E0626 19:12:58.528756  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/bridge-966554/client.crt: no such file or directory
E0626 19:13:13.426167  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/functional-900227/client.crt: no such file or directory
E0626 19:13:25.556450  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:13:30.213064  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
E0626 19:13:31.372724  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/addons-052687/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-231920 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (41.918871696s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.92s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-231920 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-231920 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-231920 --alsologtostderr -v=3: (2.134783022s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-231920 -n newest-cni-231920
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-231920 -n newest-cni-231920: exit status 7 (72.545575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-231920 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-231920 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0626 19:13:53.239231  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/custom-flannel-966554/client.crt: no such file or directory
E0626 19:13:54.192240  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/flannel-966554/client.crt: no such file or directory
E0626 19:13:57.897444  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/calico-966554/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-231920 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (25.365009887s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-231920 -n newest-cni-231920
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-sfdkh" [cf5ad427-3c63-4410-8d31-4d38788b7e56] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015716928s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-231920 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-231920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-231920 -n newest-cni-231920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-231920 -n newest-cni-231920: exit status 2 (284.721739ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-231920 -n newest-cni-231920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-231920 -n newest-cni-231920: exit status 2 (286.666337ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-231920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-231920 -n newest-cni-231920
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-231920 -n newest-cni-231920
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-sfdkh" [cf5ad427-3c63-4410-8d31-4d38788b7e56] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006832278s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-071001 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-071001 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-071001 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071001 -n old-k8s-version-071001
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071001 -n old-k8s-version-071001: exit status 2 (283.187423ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-071001 -n old-k8s-version-071001
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-071001 -n old-k8s-version-071001: exit status 2 (279.165264ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-071001 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071001 -n old-k8s-version-071001
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-071001 -n old-k8s-version-071001
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nngj8" [494fa85d-9aa4-4ff4-b7d5-d1107b4f8d1f] Running
E0626 19:17:13.171875  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/no-preload-686349/client.crt: no such file or directory
E0626 19:17:16.469022  336935 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16761-330054/.minikube/profiles/kindnet-966554/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012670022s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nngj8" [494fa85d-9aa4-4ff4-b7d5-d1107b4f8d1f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007240089s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-260593 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-wj2f9" [fa33f0aa-7669-4ba1-a03d-556d266605f2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013325638s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-wj2f9" [fa33f0aa-7669-4ba1-a03d-556d266605f2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007567717s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-460457 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-260593 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-260593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-260593 -n embed-certs-260593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-260593 -n embed-certs-260593: exit status 2 (274.682868ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-260593 -n embed-certs-260593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-260593 -n embed-certs-260593: exit status 2 (279.519352ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-260593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-260593 -n embed-certs-260593
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-260593 -n embed-certs-260593
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-460457 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-460457 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-460457 -n default-k8s-diff-port-460457
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-460457 -n default-k8s-diff-port-460457: exit status 2 (270.633777ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-460457 -n default-k8s-diff-port-460457
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-460457 -n default-k8s-diff-port-460457: exit status 2 (269.957019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-460457 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-460457 -n default-k8s-diff-port-460457
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-460457 -n default-k8s-diff-port-460457
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

                                                
                                    

Test skip (23/303)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-966554 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-966554" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-966554

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-966554"

                                                
                                                
----------------------- debugLogs end: kubenet-966554 [took: 2.99810356s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-966554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-966554
--- SKIP: TestNetworkPlugins/group/kubenet (3.14s)
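Every probe in the debugLogs dump above fails with the same two messages because the kubenet-966554 profile is never started: the test skips before cluster creation, so neither a kubectl context nor a minikube profile exists. As an illustrative, hedged sketch (not part of the test suite), a diagnostics collector could guard such probes with a context-existence check like the one below; the profile name is taken from the log.

// Sketch only: skip network probes when the kubectl context is absent, which
// is exactly why the debugLogs above report "context was not found" and
// "Profile ... not found" for every command.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists lists kubectl context names and checks for an exact match.
func contextExists(name string) bool {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true
		}
	}
	return false
}

func main() {
	const profile = "kubenet-966554" // name from the debugLogs above
	if !contextExists(profile) {
		fmt.Printf("context %q not found; skipping probes (matches the output above)\n", profile)
		return
	}
	// ...nslookup/dig/nc probes against the running cluster would go here...
}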

                                                
                                    
TestNetworkPlugins/group/cilium (3.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-966554 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-966554" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-966554

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-966554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-966554"

                                                
                                                
----------------------- debugLogs end: cilium-966554 [took: 3.832138841s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-966554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-966554
--- SKIP: TestNetworkPlugins/group/cilium (3.99s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-722348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-722348
--- SKIP: TestStartStop/group/disable-driver-mounts (0.37s)
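For context on the SKIP entries collected in this section: each is produced by an early conditional skip in the corresponding test file. The sketch below shows the general Go pattern only; the ContainerRuntime helper is hypothetical and stands in for however the suite actually reads its --container-runtime setting, so this is not minikube's real code.

// Illustrative pattern behind entries like TestDockerFlags and
// TestScheduledStopWindows above: bail out with t.Skipf / t.Skip when the
// configured container runtime or host OS does not match what the test needs.
package skips

import (
	"runtime"
	"testing"
)

// ContainerRuntime is a hypothetical stand-in for the suite's runtime setting.
func ContainerRuntime() string { return "crio" }

func TestDockerOnlyFeature(t *testing.T) {
	if rt := ContainerRuntime(); rt != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
	}
	// ...docker-specific assertions would go here...
}

func TestWindowsOnlyFeature(t *testing.T) {
	if runtime.GOOS != "windows" {
		t.Skip("test only runs on windows")
	}
	// ...windows-specific assertions would go here...
}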

                                                
                                    